Innovations in Artificial Intelligence, Predictive Analytics, and BIM (2019) – ResearchAndMarkets.com – Yahoo Finance

The "Innovations in Artificial Intelligence, Predictive Analytics, and BIM" report has been added to ResearchAndMarkets.com's offering.

This edition of the IT, Computing and Communications (ITCC) TechVision Opportunity Engine (TOE) provides a snapshot of emerging ICT-led innovations in artificial intelligence, predictive analytics, and building information modelling. This issue focuses on the application of information and communication technologies to alleviate challenges faced across sectors such as retail, agriculture, construction, healthcare, and industry.

ITCC TOE's mission is to investigate emerging wireless communication and computing technology areas, including 3G, 4G, Wi-Fi, Bluetooth, Big Data, cloud computing, augmented reality, virtual reality, artificial intelligence, virtualization and the Internet of Things, and their new applications; unearth new products and service offerings; highlight trends in the wireless networking, data management, and computing spaces; provide updates on technology funding; evaluate intellectual property; follow technology transfer and solution deployment/integration; track the development of standards and software; and report on legislative and policy issues, among many other topics.

The Information & Communication Technology cluster provides global industry analysis, technology competitive analysis, and insights into game-changing technologies in the wireless communication and computing space. Innovations in ICT have deeply permeated various applications and markets.

These innovations have a profound impact on a range of business functions, including computing, communications, business intelligence, data processing, information security, workflow automation, quality of service (QoS) measurement, simulations, customer relationship management, and knowledge management. Our global teams of industry experts continuously monitor technology areas such as Big Data, cloud computing, communication services, the mobile and wireless communication space, IT applications and services, network security, and unified communications markets. In addition, we also closely look at vertical markets and connected industries to provide a holistic view of the ICT industry.

Key Topics Covered:

Innovations in Artificial Intelligence, Predictive Analytics, and BIM

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/kmqkj0

View source version on businesswire.com: https://www.businesswire.com/news/home/20200320005350/en/

Contacts

ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For EST Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

Continued here:
Innovations in Artificial Intelligence, Predictive Analytics, and BIM (2019) - ResearchAndMarkets.com - Yahoo Finance

Researchers use Artificial Intelligence to predict drug response in lung cancer therapies – EdexLive

Researchers have used Artificial Intelligence (AI) to train algorithms that predict tumour sensitivity to three advanced non-small cell lung cancer therapies, which can help estimate treatment efficacy more accurately at an early stage of the disease.

The researchers at Columbia University's Irving Medical Center analysed CT images from 92 patients receiving the drug nivolumab in two trials, 50 patients receiving docetaxel in one trial, and 46 patients receiving gefitinib in another.

To develop the model, the researchers used the CT images taken at baseline and on first-treatment assessment.

"The purpose of this study was to train cutting-edge AI technologies to predict patients' responses to treatment, allowing radiologists to deliver more accurate and reproducible predictions of treatment efficacy at an early stage of the disease," explained Laurent Dercle, associate research scientist at the Columbia University Irving Medical Center.

Radiologists currently quantify changes in tumour size and the appearance of new tumour lesions.

However, this type of evaluation can be limited, especially in patients treated with immunotherapy, who can display atypical patterns of response and progression.

"Newer systemic therapies prompt the need for alternative metrics for response assessment, which can shape therapeutic decision-making," Dercle said in a paper appeared in the journal Clinical Cancer Research.

The researchers used machine learning to develop a model to predict treatment sensitivity in the training cohort.

Each model could predict a score ranging from zero (highest treatment sensitivity) to one (highest treatment insensitivity) based on changes in the largest measurable lung lesion identified at baseline.
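
As a rough illustration of that scoring setup -- an assumed sketch, not the study's actual features, model, or data -- a classifier trained on per-lesion imaging ("radiomics") features can emit a probability that plays the role of the zero-to-one score:

```python
# Illustrative sketch only: synthetic data and a generic classifier stand in
# for the study's real radiomics pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(92, 8))       # 8 hypothetical radiomics features per patient
y = rng.integers(0, 2, size=92)    # 1 = treatment-insensitive, 0 = sensitive

model = GradientBoostingClassifier().fit(X, y)

# predict_proba[:, 1] plays the role of the paper's score:
# near 0 ~ highest sensitivity, near 1 ~ highest insensitivity.
score = model.predict_proba(X[:1])[:, 1]
print(score[0])
```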

"We observed that similar radiomics features predicted three different drug responses in patients with advanced non-small cell lung cancer (NSCLC)," Dercle said.

"With AI, cancer imaging can move from an inherently subjective tool to a quantitative and objective asset for precision medicine approaches," he added.

Link:
Researchers use Artificial Intelligence to predict drug response in lung cancer therapies - EdexLive

The Future of Work – TDWI

The Future of Work

Artificial intelligence is on the horizon and is set to change the way we work. Who will be affected and what skills can you develop to insulate yourself?

We are on the cusp of a potentially historic change. Artificial intelligence, in all its varieties and fields of study, is permeating our society and fundamentally altering how business is performed. Companies large and small are looking to artificial intelligence to shift how they do business and stay competitive in today's economy.

Some are questioning whether the potentially transformative changes will all be beneficial to society. They look at what this could mean to employees whose jobs are being displaced by the implementation of human-augmenting technology. Economists are split as to whether these changes will lead to large-scale unemployment or whether this is an evolutionary period for skills in the economy, as was the Industrial Revolution when the agrarian-based economy transitioned to an industrialized one. As we look back, we view this shift in a positive light, but those going through it had the same types of fears and concerns that employees have today.

As an individual, what can you do to make sure you are ready for the future of work?

AI Technologies to Watch

First, it is important to understand what technologies are leading the way for this artificial intelligence revolution and how these technologies impact jobs.

Computer Vision

What is it?

Computer vision is a field of study that teaches computers to see and process what they are seeing. This is accomplished by breaking images down into patterns of pixels and translating these patterns into classifications of objects. Once the computer can categorize what it is looking at, it can use this information to perform follow-on activities. Computer vision is the basis for new technologies such as facial recognition, autonomous vehicles, and visual anomaly detection.
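
As a toy illustration of that pixels-to-labels pipeline -- a minimal sketch assuming scikit-learn, not tied to any product mentioned here -- a classifier can learn to label small images directly from their raw pixel values:

```python
# Minimal computer-vision sketch: classify 8x8 digit images purely from
# their pixel patterns (scikit-learn's bundled digits dataset).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()             # each image is 64 grayscale pixel values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# The model learns which pixel patterns correspond to which digit class.
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

Real-world systems swap the linear model for deep convolutional networks, but the idea is the same: pixels in, class labels out.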

What jobs does it impact?

Jobs that rely heavily on sight -- especially when paired with repetitive processing based on what is being seen -- are most at risk of having all or part of their work replaced by computer vision.

Robotic Process Automation

What is it?

Robotic process automation is software that learns from a user's repetitive tasks and can mimic these actions after a period of training. This could include monitoring tasks such as keystrokes and mouse clicks that a user performs. In the background, as this software is monitoring user behavior, it is automatically configuring itself to continue with the same task or similar tasks in perpetuity. This automated processing of repetitive tasks can greatly increase the speed and accuracy of many business processes.
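
A hypothetical sketch of the replay half of such a bot -- pyautogui is a real desktop-automation library, but the recorded action log here is invented for illustration:

```python
# Replay a (hypothetical) recorded log of user actions. In a real RPA tool
# the log would come from a monitoring phase; here it is hard-coded.
import time
import pyautogui

recorded_actions = [
    {"kind": "click", "x": 220, "y": 180},             # e.g. focus a form field
    {"kind": "type",  "text": "quarterly_report.xlsx"},
    {"kind": "click", "x": 400, "y": 320},             # e.g. press "Open"
]

for action in recorded_actions:
    if action["kind"] == "click":
        pyautogui.click(action["x"], action["y"])
    elif action["kind"] == "type":
        pyautogui.typewrite(action["text"])
    time.sleep(0.5)   # pace the replay roughly like a human
```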

What jobs does it impact?

Jobs that require users to do the same task repeatedly throughout the day are most at risk of having all or part of their job replaced by robotic process automation.

Natural Language Processing

What is it?

Natural language processing includes multiple subfields, each focused on the interpretation and creation of text in a format that is natural to the way we communicate. This includes speech-to-text, text-to-speech, language translation, natural language understanding, and natural language generation. Just as computer vision uses patterns of pixels to make decisions, natural language processing uses patterns of words to infer meaning and drive decisions from that meaning.
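
A minimal sketch of that words-to-meaning idea -- an assumed example using scikit-learn, not tied to any product above -- infers a customer's intent from word patterns alone:

```python
# Tiny intent classifier: bag-of-words counts feed a Naive Bayes model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_phrases = [
    "what time do you close", "when are you open",
    "how much does this cost", "what is the price of shipping",
]
intents = ["hours", "hours", "pricing", "pricing"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(training_phrases, intents)

print(model.predict(["what does delivery cost"]))   # -> ['pricing']
```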

What jobs does it impact?

Jobs that use speech to accept or fulfill orders or provide services are most at risk of having all or part of their job replaced by natural language processing. With natural language generation, jobs involved in the creation of text content are also at great risk of having all or part of their job replaced.

Learning Resilient Attitudes

Given these technologies, what can we do to enhance our skills so we are prepared to evolve our jobs to a higher level as machines and automation replace the repetitive aspects of our work? Here are three approaches that are in high demand today and that are resilient because they cannot easily be replaced by artificial intelligence.

Design Thinking

Design thinking is an iterative process that includes understanding users, their behaviors, and their journey through business processes; challenging existing assumptions and constraints that have marred their experience; and redefining problems to identify alternative strategies and solutions. The goal of design thinking is to find solutions that are not instantly apparent with our initial level of understanding of the situation but manifest themselves when patterns are isolated and viewed from different points of view. Design thinking includes empathy with your users, questioning assumptions, brainstorming new ideas, prototyping, and testing solutions.

Growth Mindset

People with a growth mindset subscribe to the theory that failure is not bad but rather is an opportunity to grow. People with this skill and outlook on life view the world differently than those with a fixed mindset. They focus on continuous learning. They learn from failures, feedback, experimentation, and the successes of others. They see challenges less as barriers to success and more as opportunities to discover new abilities to master. Those with a growth mindset don't fear the implementation of artificial intelligence in part of their job -- instead, they look at it as an opportunity to free up time wasted on repetitive tasks and focus on new learning, driving them to higher-value skills.

Digital Dexterity

Digital dexterity is the desire and ability to embrace existing and emerging technologies to achieve better business outcomes. It's a matter of both attitude and skills. This includes understanding how and where artificial intelligence can be implemented in business processes to drive target business objectives. Digital dexterity is tightly aligned with both design thinking and growth mindset and includes the identification and implementation of technology that can transform discovered ideas into reality.

A Final Word

As the economy is on the precipice of a revolutionary shift driven by artificial intelligence, there is significant anxiety and fear among workers. The threat of job loss weighs heavily on society. The best way to free yourself from this burden is to better understand what technologies are involved in this shift and what aspects of existing jobs they most threaten. Apply this knowledge -- acquire the new skills and aptitudes to ensure you do not become a victim of the shifting economy.

About the Author

Troy Hiltbrand is the chief digital officer at Kyäni, where he is responsible for digital strategy and transformation. You can reach the author at thiltbrand@kyanicorp.com.

More here:
The Future of Work - TDWI

The Influence of Artificial Intelligence on Future Education – Modern Diplomacy

While the market for facial recognition tools and services is expected to more than double in value to $7bn by 2024, there have been repeated calls by politicians and civil rights agencies to safeguard against potential misuse of the technology. Biometric monitoring and susceptibility to unfair bias are primary concerns, along with a lack of industry standards that is a barrier to companies and governments deploying the technology's potential benefits.

To help organizations tackle this challenge, the World Economic Forum released the first framework for the safe and trustworthy use of facial recognition technology. The Framework for Responsible Limits on Facial Recognition was built by the Forum, industry actors, policy makers, civil society representatives and academics. It is meant to be deployed and tested as a tool to mitigate risks from potential unethical practices of the technology.

"Although the progress in facial recognition technology has been considerable over the past few years, ethical concerns have surfaced regarding its limitations," said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. "Our ambition is to empower citizens and representatives as they navigate the different trade-offs they will face along the way."

This is the first framework to go beyond general principles and to operationalize use cases for two distinct audiences: engineering teams and policy makers. Members of the working group have played two complementary roles:

The first are contributors: industry representatives (Groupe ADP, Amazon Web Services, IDEMIA, IN Groupe, Microsoft and SNCF); policy makers (members of the French Parliament, OPECST); academics; civil society organizations; and AFNOR Certification. The second are observers: the French Data Protection Authority (Commission Nationale de l'informatique et des libertés, CNIL) and the French Digital Council (Conseil National du Numérique).

"I support the idea of a bill at the French Parliament to enable this kind of experiment, which is essential to inform the public debate on facial recognition technology," said Didier Baichère, French MP. "More specifically, this bill aims to define the scope, objectives, stakeholders, and territories where such an experiment could be conducted, as well as the requirements for an informed and inclusive public consultation to promote public knowledge of the opportunities and the limits of facial recognition technology."

"Recent scientific progress, both in artificial intelligence and in computer vision more specifically, has enabled, in just a few years, a significant breakthrough in areas related to facial recognition," said Jean-Luc Dugelay, computer vision researcher at EURECOM Sophia Antipolis. "For that reason, I believe that it is essential to accompany these advances in science with a global policy reflection on the appropriate use of this technology, through a multistakeholder collaboration that involves academics, engineers, technology providers and users, policy-makers, lawyers and citizens."

"The need for shared landmarks for artificial intelligence in general, and its application to facial recognition in particular, is primordial," considers Olivier Peyrat, Chief Executive Officer of the AFNOR group. "I consider positive all collective initiatives aimed at promoting transparency, the sharing of the same language, precise and unequivocal, as well as the definition of measures of confidence. The challenge is to create conditions accepted by public actors, private actors and citizens, to make possible the development and implementation of these new technologies in a serene environment."

This framework is structured around four steps:

Define what constitutes the responsible use of facial recognition through the drafting of a set of principles for action. These principles focus on privacy, bias mitigation, the proportional use of the technology, accountability, consent, right to accessibility, children's rights and alternative options.

Design best practices to support product teams in the development of systems that are responsible by design, focusing on four main dimensions: justify the use of facial recognition, design a data plan that matches end-user characteristics, mitigate the risks of biases, and inform end-users.

Assess to what extent the system designed is responsible, through an assessment questionnaire that describes what rules should be respected for each use case to comply with the principles for action.

Validate compliance with the principles for action through the design of an audit framework by a trusted third party (AFNOR Certification for the policy pilot).

France joined the World Economic Forum Centre for the Fourth Industrial Revolution in January 2019. The framework was co-designed by a fellow from the French government in residence at the Centre.

Here is the original post:
The Influence of Artificial Intelligence on Future Education - Modern Diplomacy

Artificial Intelligence can better predict drug response to lung cancer therapies – The Sentinel Assam

NEW YORK: Researchers have used Artificial Intelligence (AI) to train algorithms that predict tumor sensitivity to three advanced non-small cell lung cancer therapies, which can help estimate treatment efficacy more accurately at an early stage of the disease.

The researchers at Columbia University's Irving Medical Center analyzed CT images from 92 patients receiving the drug nivolumab in two trials, 50 patients receiving docetaxel in one trial, and 46 patients receiving gefitinib in another.

To develop the model, the researchers used the CT images taken at baseline and on first-treatment assessment.

"The purpose of this study was to train cutting-edge AI technologies to predict patients' responses to treatment, allowing radiologists to deliver more accurate and reproducible predictions of treatment efficacy at an early stage of the disease," explained Laurent Dercle, an associate research scientist at the Columbia University Irving Medical Center.

Radiologists currently quantify changes in tumor size and the appearance of new tumor lesions.

However, this type of evaluation can be limited, especially in patients treated with immunotherapy, who can display atypical patterns of response and progression.

"Newer systemic therapies prompt the need for alternative metrics for response assessment, which can shape therapeutic decision-making," Dercle said in a paper that appeared in the journal Clinical Cancer Research.

The researchers used machine learning to develop a model to predict treatment sensitivity in the training cohort.

Each model could predict a score ranging from zero (highest treatment sensitivity) to one (highest treatment insensitivity) based on changes in the largest measurable lung lesion identified at baseline.

"We observed that similar radiomics features predicted three different drug responses in patients with advanced non-small cell lung cancer (NSCLC)," Dercle said.

"With AI, cancer imaging can move from an inherently subjective tool to a quantitative and objective asset for precision medicine approaches," he added. (IANS)

Original post:
Artificial Intelligence can better predict drug response to lung cancer therapies - The Sentinel Assam

EdgeTier wants AI to get along with customer service agents rather than replace them – Fora.ie

Founders: Ciarán Tobin, Bart Lehane and Shane Lynn
Elevator pitch: Artificial intelligence for customer service
Funding: €1.5 million in seed funding
Status: Customers in banking, insurance and e-commerce

THE FOUNDERS OF artificial intelligence startup EdgeTier might come from technical backgrounds in data science, but that doesn't mean their expertise can't be applied to an everyday problem.

The Dublin business is building artificial intelligence tools that help customer service agents simplify how they deal with queries and access the information they need.

Co-founder and chief executive Shane Lynn admits that EdgeTier has thrown its hat into a crowded space but he and co-founders Ciarán Tobin and Bart Lehane felt they could still offer something unique.

In Ireland there are tens of thousands of people working in customer service, meaning there's an appetite among these companies to improve their processes as much as possible.

"We spotted and looked for common problems between the different customer service organisations and that's where the idea for the product that we're now building and selling came about," Lynn told Fora.

"Still, a lot of customer service organisations aren't running efficiently. There are people doing things that computers are good at and there are people expecting computers to do things that humans are good at," he said.

Human touch

The best parts of customer service are the human parts, he added, whether that's understanding a complex question, showing empathy for a situation or negotiating a compromise.

These are human communication traits that a customer service AI cannot fully replicate.

"They're very hard to fake and they're very hard to implement with a computer system."

According to Lynn, the push for automation using AI can sometimes expect too much of these systems.

Full automation is not quite there and perhaps wont be for some time if customer service agents want to maintain the human element.

What can be automated is the mundane and repetitive, such as looking up specific data or answering specific questions that have definite answers: "What time does the store close?" or "How much does that product cost?"

"All of that work isn't that valuable from a customer experience point of view; it is valuable because it needs to be done, but it's not really what humans are good at. Humans are good at the communications piece," Lynn said.

EdgeTier's solution, dubbed Arthur, guides the customer service agent to and through the information they need to more accurately assist the customer with their specific query and to address complicated needs that require human understanding.

One example could be in travel, where a customer accidentally double-books a journey and wants a refund on one booking but not the other, and maybe wants to change one detail while keeping the rest.

"Loads of these agents are working and what we want to do is free up their time by letting the computer do the bits that the computers are good at and letting the human concentrate on the actual communication between the business and the customer and get to the nub of the problem."

Next steps

To date, EdgeTier is working with companies in Ireland and the UK as well as a bank in Hungary. Its customers are usually in the travel, e-commerce, insurance and banking sectors.

Its revenue stream is a licensing model based on the number of customer interactions that run through the system.

The startup raised €1.5 million last year to finance its growth push and acquire new customers. The seed round was led by London venture capital firm Episode 1 with participation from Act Venture Capital and Enterprise Ireland.

Lynn said that for companies able to cut through the noise of AI hype, the opportunity is significant.

"While it sounds cool, it's not particularly reliable that an AI is learning from previous agents' behaviour because you have no control over what the quality of that behaviour is," he said.

If an AI is simply learning from the customer service agents practices, it may be picking up bad answers too and learning them. As the saying goes, garbage in, garbage out.

"We sit down with senior agents and we extract and work with them to embody what is the best practice in these particular instances and hone and fine-tune it," Lynn said. "We pick whoever is the top-performing agent and we essentially encapsulate their specific knowledge of the system."

Follow this link:
EdgeTier wants AI to get along with customer service agents rather than replace them - Fora.ie

Putting Artificial Intelligence to Work in the Lab – Lab Manager Magazine

Dr. Agustin Schiffrin and his team at the School of Physics and Astronomy at Monash University. (Credit: FLEET)

An Australian-German collaboration has demonstrated fully-autonomous scanning probe microscopy (SPM) operation, applying artificial intelligence and deep learning to remove the need for constant human supervision.

The new system, dubbed DeepSPM, bridges the gap between nanoscience, automation, and artificial intelligence (AI), and firmly establishes the use of machine learning for experimental scientific research.

Image acquired by scanning tunneling microscopy (STM): individual silver atoms on a crystalline metal surface. (Credit: FLEET)

"Optimizing SPM data acquisition can be very tedious. This optimization process is usually performed by the human experimentalist, and is rarely reported," says ARC Centre of Excellence in Future Low-Energy Electronics Technologies (FLEET) chief investigator Dr. Agustin Schiffrin of Monash University.

"Our new AI-driven system can operate and acquire optimal SPM data autonomously, for multiple straight days, and without any human supervision."

The advance brings SPM methodologies such as atomically precise nanofabrication and high-throughput data acquisition closer to a fully automated turnkey application.

The new deep learning approach can be generalized to other SPM techniques. The researchers have made the entire framework publicly available online as open source, creating an important resource for the nanoscience research community.

Image acquired by atomic force microscopy (AFM): a single molecule, similar to chlorophyll. (Credit: FLEET)

"Crucial to the success of DeepSPM is the use of a self-learning agent, as the correct control inputs are not known beforehand," says Dr. Cornelius Krull, project co-leader.

"Learning from experience, our agent adapts to changing experimental conditions and finds a strategy to keep the system stable," says Krull, who works with Schiffrin at the Monash School of Physics and Astronomy.

The AI-driven system begins with an algorithmic search of the best sample regions and proceeds with autonomous data acquisition.

It then uses a convolutional neural network to assess the quality of the data. If the quality of the data is not good, DeepSPM uses a deep reinforcement learning agent to improve the condition of the probe.

DeepSPM can run for several days, acquiring and processing data continuously, while managing SPM parameters in response to varying experimental conditions, without any supervision.
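
Based only on the behavior described above, the control loop can be sketched as follows; every function here is an invented stub, not the released DeepSPM code:

```python
# Hedged sketch of an autonomous SPM loop: acquire, assess with a CNN,
# and hand control to an RL agent when the probe degrades. All stubs.
import random

def next_region():                 # stand-in for the algorithmic region search
    return (random.random(), random.random())

def acquire_image(region):         # stand-in for autonomous data acquisition
    return [random.random() for _ in range(64)]

def image_quality(image):          # stand-in for the convolutional neural network
    return random.random()

def condition_probe():             # stand-in for the deep RL agent
    pass

def autonomous_session(max_steps=100, quality_threshold=0.8):
    region, kept = next_region(), []
    for _ in range(max_steps):
        image = acquire_image(region)
        if image_quality(image) >= quality_threshold:
            kept.append(image)     # good data: keep it and move on
            region = next_region()
        else:
            condition_probe()      # poor data: let the agent improve the tip
    return kept

print(f"kept {len(autonomous_session())} good scans")
```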

The study demonstrates fully autonomous, long-term SPM operation for the first time by combining:

- an algorithmic search for suitable sample regions
- a convolutional neural network to assess the quality of the acquired data
- a deep reinforcement learning agent that conditions the probe when data quality degrades

This press release was originally published on the ARC Centre of Excellence in Future Low-Energy Electronics Technologies (FLEET) website.

Read the rest here:
Putting Artificial Intelligence to Work in the Lab - Lab Manager Magazine

I asked eight chatbots if I had Covid-19. The answers varied widely – STAT

U.S. hospitals, public health authorities, and digital health companies have quickly deployed online symptom checkers to screen patients for signs of Covid-19. The idea is simple: By using a chatbot powered by artificial intelligence, they can keep anxious patients from inundating emergency rooms and deliver sound health advice from afar.

Or at least that was the pitch.

Late last week, a colleague and I drilled more than a half-dozen chatbots on a common set of symptoms (fever, sore throat, runny nose) to assess how they worked and the consistency and clarity of their advice. What I got back was a conflicting, sometimes confusing patchwork of information about the level of risk posed by these symptoms and what I should do about them.

A chatbot posted on the website of the Centers for Disease Control and Prevention determined that I had "one or more symptom(s) that may be related to COVID-19" and advised me to contact a health care provider within 24 hours and "start home isolation immediately."

But a symptom checker from Buoy Health, which says it is based on current CDC guidelines, found that my risk of a serious Novel Coronavirus (COVID-19) infection "is low right now" and told me to keep monitoring my symptoms and check back if anything changes. Others concluded I was at "medium risk" or might have the infection.

Most people will probably consult just one of these bots, not eight different versions as I did. But experts on epidemiology and the use of artificial intelligence in medicine said the wide variability in their responses undermines the value of automated symptom checkers to advise people at a time when, above all else, they are looking for reliable information and clear guidance.

"These tools generally make me sort of nervous because it's very hard to validate how accurate they are," said Andrew Beam, an artificial intelligence researcher in the department of epidemiology at the Harvard T.H. Chan School of Public Health. "If you don't really know how good the tool is, it's hard to understand if you're actually helping or hurting from a public health perspective."

The rush to deploy these chatbots underscores a broader tension in the coronavirus outbreak between the desire of technology companies and digital health startups to pitch new software solutions in the face of a fast-moving and unprecedented crisis, and the solemn duty of medical professionals to ensure that these interventions truly benefit patients and don't cause harm or spread misinformation. A 2015 study published by researchers at Harvard and several Boston hospitals found that symptom checkers for a range of conditions often reach errant conclusions when used for triage and diagnosis.

Told of STAT's findings, Buoy's chief executive, Andrew Le, said he would synchronize the company's symptom checker with the CDC's. "Now that they have a tool, we are going to use it and adopt the same kind of screening protocols that they suggest and put it on ours," he said. "This is probably just a discrepancy in time, because we've been attending all of their calls and trying to stay as close to their guidelines as possible."

The CDC did not respond to a request for comment.

Before I continue, I should note that neither I nor my colleague is feeling ill. We devised a simple test to assess the chatbots and limited the experiment to the web- and smartphone-based tools themselves so as not to waste the time of front-line clinicians. We chose a set of symptoms that were general enough to be any number of things, from a common cold, to the flu, to yes, coronavirus. The CDC says the early symptoms of Covid-19 are fever, cough, and shortness of breath.

The differences in the advice we received are understandable to an extent, given that these chatbots are designed for slightly different purposes: some are meant to determine the risk of coronavirus infection, and others seek to triage patients or assess whether they should be tested. They also collect and analyze different pieces of information. Buoy's bot asked me more than 30 questions, while Cleveland Clinic's and bots created by several other providers posed fewer than 10.

But the widely varying recommendations highlighted the difficulty of distinguishing coronavirus from more common illnesses, and delivering consistent advice to patients.

The Cleveland Clinic's tool determined that I was at medium risk and should either take an online questionnaire, set up a virtual visit, or call my primary care physician. Amy Merino, a physician and the clinic's chief medical information officer, said the tool is designed to package the CDC's guidelines in an interactive experience. "We do think that as we learn more, we can optimize these tools to enable patients to provide additional personal details to personalize the results," she said.

Meanwhile, another tool created by Verily, Alphabet's life sciences arm, to help determine who in certain northern California counties should be tested for Covid-19, concluded that my San Francisco-based colleague, who typed in the same set of symptoms, was not eligible for testing.

But in the next sentence, the chatbot said: "Please note that this is not a recommendation of whether you should be tested." In other words, a non-recommendation recommendation.

A spokeswoman for Verily wrote in an email that the language the company uses is meant to reinforce that the screening tool is complementary to testing happening in a clinical care situation. She wrote that more than 12,000 people have completed the online screening exam, which is based on criteria provided by the California Department of Public Health.

The challenge facing creators of chatbots is magnified when it comes to products that are built on limited data and guidelines that are changing by the minute, including which symptoms characterize infection and how patients should be treated. A non-peer-reviewed study published online Friday by researchers at Stanford University found that using symptoms alone to distinguish between respiratory infections was only marginally effective.

"A week ago, if you had a chatbot that was saying, 'Here are the current recommendations,' it would be unrecognizable from where we are today, because things have just moved so rapidly," said Karandeep Singh, a physician and professor at the University of Michigan who researches artificial intelligence and digital health tools. "Everyone is rethinking things right now and there's a lot of uncertainty."

To keep up, chatbot developers will have to constantly update their products, which rely on branching logic or statistical inference to deliver information based on knowledge that is encoded into them. That means keeping up to date on new data that are being published every day on the number of Covid-19 cases in different parts of the world, who should be tested based on available resources, and the severity of illness it is causing in different types of people.
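
For a sense of what that branching logic looks like in practice, here is an invented toy triage function; the rules are illustrative only and are not any provider's actual guidance:

```python
# Toy symptom-checker branching logic (illustrative rules, not medical advice).
def triage(fever, cough, short_of_breath, travel_or_contact):
    if short_of_breath:
        return "Seek emergency care now"
    if (fever or cough) and travel_or_contact:
        return "Contact a health care provider within 24 hours; isolate at home"
    if fever or cough:
        return "Monitor symptoms and check back if anything changes"
    return "Low risk right now"

print(triage(fever=True, cough=False, short_of_breath=False,
             travel_or_contact=False))
```

Every guideline update means editing rules like these by hand, which helps explain how different tools drift apart.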

Differences I found in the information being collected by the chatbots seemed to reflect the challenges of keeping current. All asked if I had traveled to China or Iran, but that's where commonality ended. The Cleveland Clinic asked whether I had visited a single country in Europe (Italy, which has the second-most confirmed Covid-19 cases in the world), while Buoy asked whether I had visited any European country. Providence St. Joseph Health, a hospital network based in Washington state, broke out a list of several countries in Europe, including Italy, Spain, France, and Germany.

After STAT inquired about limiting its chatbot's focus to Italy, Cleveland Clinic updated its tool to include the United Kingdom, Ireland, and the 26 European countries included in the Schengen area.

The differences also included the symptoms they asked about and the granularity of information they were capable of collecting and analyzing. Buoy's bot, which suggested I had a common cold, was able to collect detailed information, such as specific temperature ranges associated with my fever and whether my sore throat was moderate or severe.

But Providence St. Joseph asked only whether I had experienced any one of several symptoms, including fever, sore throat, runny nose, cough, or body aches. I checked yes to that question, and no to queries about whether I had traveled to an affected country or come in contact with someone with a lab-confirmed case of Covid-19. The bot (built, like the CDC one, with tools from Microsoft) offered the following conclusion: "You might be infected with the coronavirus. Please do one of the following: call your primary care physician to schedule an evaluation or call 911 for a life-threatening emergency."

All of the chatbots I consulted included some form of disclaimer urging users to contact their doctors or otherwise consult with medical professionals when making decisions about their care. But the fact that most offered a menu of fairly obvious options about what I should do seemed to undercut the value of the exercise.

Beam, the professor at Harvard, said putting out inaccurate or confusing information in the middle of a public health crisis can result in severe consequences.

"If you're too sensitive, and you're sending everyone to the emergency room, you're going to overwhelm the health system," he said. Likewise, if you're not sensitive enough, you could be telling people who are ill that they don't need emergency medical care. "It's certainly no replacement for picking up the phone and calling your primary care physician."

If anyone would be enthusiastic about the possibilities of deploying artificial intelligence in epidemiology, Beam would be the guy. His research is focused on applying AI in ways that help improve the understanding of infectious diseases and the threat they pose. And even though he said the effort to deploy automated screening tools is well intentioned, and that digital health companies can help stretch resources in the face of Covid-19, he cautioned providers to be careful not to get ahead of the technology's capabilities.

"My sense is that we should err to the centralized expertise of public health experts instead of giving people 1,000 different messages they don't know what to do with," he said. "I want to take this kind of technology and integrate it with traditional epidemiology and public health techniques."

"In the long run I'm very bullish on these two worlds becoming integrated with one another," he added. "But we're not there yet."

Erin Brodwin contributed reporting.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

Link:
I asked eight chatbots if I had Covid-19. The answers varied widely - STAT

An AI that mimics how mammals smell recognizes scents better than other AI – Science News

When it comes to identifying scents, a neuromorphic artificial intelligence beats other AI by more than a nose.

The new AI learns to recognize smells more efficiently and reliably than other algorithms. And unlike other AI, this system can keep learning new aromas without forgetting others, researchers report online March 16 in Nature Machine Intelligence. The key to the program's success is its neuromorphic structure, which resembles the neural circuitry in mammalian brains more than other AI designs do.

This kind of algorithm, which excels at detecting faint signals amidst background noise and continually learning on the job, could someday be used for air quality monitoring, toxic waste detection or medical diagnoses.

The new AI is an artificial neural network, composed of many computing elements that mimic nerve cells to process scent information (SN: 5/2/19). The AI sniffs by taking in electrical voltage readouts from chemical sensors in a wind tunnel that were exposed to plumes of different scents, such as methane or ammonia. When the AI whiffs a new smell, that triggers a cascade of electrical activity among its nerve cells, or neurons, which the system remembers and can recognize in the future.

Like the olfactory system in the mammalian brain, some of the AI's neurons are designed to react to chemical sensor inputs by emitting differently timed pulses. Other neurons learn to recognize patterns in those blips that make up the odor's electrical signature.

This brain-inspired setup primes the neuromorphic AI for learning new smells better than a traditional artificial neural network, which starts as a uniform web of identical, blank-slate neurons. If a neuromorphic neural network is like a sports team whose players have assigned positions and know the rules of the game, an ordinary neural network is initially like a bunch of random newbies.

As a result, the neuromorphic system is a quicker, nimbler study. Just as a sports team may need to watch a play only once to understand the strategy and implement it in new situations, the neuromorphic AI can sniff a single sample of a new odor to recognize the scent in the future, even amidst other unknown smells.

In contrast, a bunch of beginners may need to watch a play many times to reenact the choreography, and may still struggle to adapt it to future game-play scenarios. Likewise, a standard AI has to study a single scent sample many times, and still might not recognize it when the scent is mixed up with other odors.

Thomas Cleland of Cornell University and Nabil Imam of Intel in San Francisco pitted their neuromorphic AI against a traditional neural network in a smell test of 10 odors. To train, the neuromorphic system sniffed a single sample of each odor. The traditional AI underwent hundreds of training trials to learn each odor. During the test, each AI sniffed samples in which a learned smell was only 20 to 80 percent of the overall scent, mimicking real-world conditions where target smells are often intermingled with other aromas. The neuromorphic AI identified the right smell 92 percent of the time. The standard AI achieved 52 percent accuracy.
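
A toy re-creation of that protocol -- invented data and a plain template matcher, not the paper's spiking neuromorphic model -- shows the one-sample-per-odor, mixed-plume setup:

```python
# One-shot odor identification: store one noisy template per odor, then
# identify the target odor when it is only 20-80% of the sensed mixture.
import numpy as np

rng = np.random.default_rng(1)
n_odors, n_sensors = 10, 72
odors = rng.random((n_odors, n_sensors))        # "true" sensor signatures

templates = odors + 0.05 * rng.standard_normal((n_odors, n_sensors))
templates /= np.linalg.norm(templates, axis=1, keepdims=True)

correct, trials = 0, 1000
for _ in range(trials):
    target = rng.integers(n_odors)
    frac = rng.uniform(0.2, 0.8)                # learned smell is 20-80% of the plume
    sample = frac * odors[target] + (1 - frac) * rng.random(n_sensors)
    guess = int(np.argmax(templates @ sample))  # closest stored template wins
    correct += (guess == target)

print("accuracy:", correct / trials)
```

Note that adding a new odor later is just another row appended to templates, leaving the learned rows untouched; that is the continual-learning property described below.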

Priyadarshini Panda, a neuromorphic engineer at Yale University, is impressed by the neuromorphic AI's keen sense of smell in muddled samples. The new AI's one-and-done learning strategy is also more energy-efficient than traditional AI systems, which tend to be very power hungry, she says (SN: 9/26/18).

Another perk of the neuromorphic setup is that the AI can keep learning new smells after its original training if new neurons are added to the network, similar to the way that new cells continually form in the brain.

As new neurons are added to the AI, they can become attuned to new scents without disrupting the other neurons. It's a different story for traditional AI, where the neural connections involved in recognizing a certain odor, or set of odors, are more broadly distributed across the network. Adding a new smell to the mix is liable to disturb those existing connections, so a typical AI struggles to learn new scents without forgetting others unless it's retrained from scratch, using both the original and new scent samples.

To demonstrate this, Cleland and Imam trained their neuromorphic AI and a standard AI to specialize in recognizing toluene, which is used to make paints and fingernail polish. Then the researchers tried to teach the neural networks to recognize acetone, an ingredient of nail polish remover. The neuromorphic AI simply added acetone to its scent-recognition repertoire, but the standard AI couldn't learn acetone without forgetting the smell of toluene. These kinds of memory lapses are a major limitation of current AI (SN: 5/14/19).

Continual learning seems to work well for the neuromorphic system when there are few scents involved, Panda says. "But what if you make it large-scale?" In the future, researchers could test whether this neuromorphic system can learn a much broader array of scents. "But this is a good start," she says.

Read the original here:
An AI that mimics how mammals smell recognizes scents better than other AI - Science News

Artificial Intelligence is Becoming the Future of Investment Platforms – EnterpriseTalk

How can AI help in investment decisions? And if there are challenges, how does your platform help to resolve those challenges?

As to why investors in general need AI, there are enormous amounts of data out there, and there is an ongoing battle over that available data. The industry as a whole now produces all kinds of data-based financial reporting and statements, and investors and industry players alike can buy really well-structured data as a result. AI has the ability to study massive amounts of this data and identify patterns.

Let us assume we identified a stock pattern today, and we want to figure out what to do next: buy or sell. AI can find somewhat similar patterns that existed in history and then analyze what happened right after. Knowing what happened after the pattern in the past may suggest what could happen next.
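
A minimal sketch of that idea on synthetic data -- an assumption-laden illustration, not the platform's actual algorithm -- normalizes the recent price window, finds its nearest historical matches, and averages what followed them:

```python
# Nearest-neighbor pattern search over a synthetic price history.
import numpy as np

rng = np.random.default_rng(2)
prices = np.cumsum(rng.standard_normal(5000)) + 100.0   # fake price series

window = 20
today = prices[-window:]
today_n = (today - today.mean()) / today.std()          # compare shape, not level

matches = []
for start in range(len(prices) - 2 * window):           # leave room for "what came next"
    w = prices[start:start + window]
    w_n = (w - w.mean()) / w.std()
    matches.append((np.linalg.norm(w_n - today_n), start))

matches.sort()
next_moves = [prices[s + window + 5] - prices[s + window] for _, s in matches[:10]]
print("avg 5-step move after the 10 closest patterns:", np.mean(next_moves))
```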

We can identify patterns for stocks, Forex, ETFs, mutual funds, and even currencies. With that said, some patterns will not work for certain stocks; that is why people need a complete picture, including discovery, testing, and a presentation of results.

What if there is a challenge and they are having a problem identifying the patterns? How, then, does AI support this kind of investor?

Challenges can also be patterns. Let us assume there is a significant drop in the market today; AI can go back through historical data and find similar significant drops in the market to come to pattern-based conclusions, such as which particular stocks continue to go down and which stocks tend to quickly bounce back. And in that regard, AI helps to solve the challenges in conjunction with human involvement, where humans can take these signals and use them for making better trading decisions.

That perspective raises the question: can AI effectively trade or manage a portfolio without any human involvement? So far, there is only one recorded example, a hedge fund claiming no human involvement. In all other cases, at this moment, humans have some kind of involvement. Today, the best minds in the finance industry are working on solutions that can help interpret challenges or anomalies in the market, including significant drops or significant jumps. Beyond AI, many companies use robots to work on these solutions, too. They look at the expense ratio and come up with the best-case scenario; we are talking about fully automated robots that can solve the challenges that arise.

Are there any security challenges in data processing of this type?

Data security challenges are the same whether AI is involved or not: you have to be secure either way. With that said, you do need to protect against black swans, when something unexpected happens and the AI can react and perform a problematic money maneuver.

Think about the verification challenges when people put driverless cars on autopilot and the car sees something unexpected. There is a chance it will crash, as Tesla demonstrated recently when a driver fully relied on autopilot. When it comes to AI and investing, a lot of money could be on the line.

So you see AI as a future of investment platforms? How is your platform leveraging AI differently?

Yes, absolutely. It is an enormous amount of power, and no human being can compete with the speed and volume of this power when applied to trading.

Here is the main difference with AI in our approach: to make it convenient for our users, we test a lot of strategies in advance, which means that a typical investor gets access to a secure cloud. In our secure, local cloud, we run a lot of pre-calculations over different strategies, tens of thousands of them simultaneously. We don't know what is going to happen with these tens of thousands of strategies, but we know that if a user on our site wants to use one of them, it is going to be pre-calculated. That way, the person has more immediate access to our data and analysis. And that is our main feature: a person can use our AI on request.

See the rest here:
Artificial Intelligence is Becoming the Future of Investment Platforms - EnterpriseTalk