Artificial Intelligence in Aviation Market 2020 Size, Share Metrics, Growth Trends and Forecast to 2026 – Food & Beverage Herald

New Jersey, United States: Verified Market Research indicates that the Artificial Intelligence in Aviation Market is expected to surge at a steady rate in the coming years as economies flourish. The research report, titled Global Artificial Intelligence in Aviation Market Research Report 2020, provides a comprehensive review of the global market. Analysts have identified the key drivers and restraints in the overall market, and have studied the historical milestones achieved by the Global Artificial Intelligence in Aviation Market alongside emerging trends. Comparing the two has enabled the analysts to draw a potential trajectory of the Global Artificial Intelligence in Aviation Market for the forecast period.

Global Artificial Intelligence in Aviation Market was valued at USD 0.11 Billion in 2017 and is projected to reach USD 1.8 Billion by 2025, growing at a CAGR of 45.3% from 2018 to 2025.

Request a Sample Copy of this report @ https://www.verifiedmarketresearch.com/download-sample/?rid=3543&utm_source=FHN&utm_medium=005

Top 10 Companies in the Global Artificial Intelligence in Aviation Market Research Report:

Global Artificial Intelligence in Aviation Market: Competitive Landscape

The competitive landscape section explains the strategies adopted by key players in the market. Key developments and management changes in recent years are covered through company profiling, helping readers understand the trends that will accelerate market growth. It also includes the investment strategies, marketing strategies, and product development plans of major players. The market forecast will help readers make better-informed investments.

Global Artificial Intelligence in Aviation Market: Drivers and Restraints

This section of the report discusses the various drivers and restraints that have shaped the global market. The detailed study of these drivers enables readers to get a clear perspective on the market, covering the market environment, government policies, product innovations, breakthroughs, and market risks.

The research report also points out the myriad opportunities, challenges, and market barriers present in the Global Artificial Intelligence in Aviation Market. The comprehensive nature of this information will help readers determine and plan strategies to benefit from them. Restraints, challenges, and market barriers also help readers understand how a company can protect itself from decline.

Global Artificial Intelligence in Aviation Market: Segment Analysis

This section of the report covers segmentation by application, product type, and end user. These segmentations help determine which parts of the market will progress faster than others. The segmentation analysis provides information about the key elements driving specific segments ahead of others, and helps readers understand which strategies make for sound investments. The Global Artificial Intelligence in Aviation Market is segmented on the basis of product type, application, and end user.

Global Artificial Intelligence in Aviation Market: Regional Analysis

This part of the report includes detailed information about the market in different regions. Each region offers a different scope to the market, as each has its own government policies and other factors. The regions included in the report are North America, South America, Europe, Asia Pacific, and the Middle East. Information about the different regions helps the reader understand the global market better.

Ask for Discount @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=3543&utm_source=FHN&utm_medium=005

Table of Contents

1 Introduction of Artificial Intelligence in Aviation Market

1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology of Verified Market Research

3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Artificial Intelligence in Aviation Market Outlook

4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Forces Model
4.4 Value Chain Analysis

5 Artificial Intelligence in Aviation Market, By Deployment Model

5.1 Overview

6 Artificial Intelligence in Aviation Market, By Solution

6.1 Overview

7 Artificial Intelligence in Aviation Market, By Vertical

7.1 Overview

8 Artificial Intelligence in Aviation Market, By Geography

8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 Artificial Intelligence in Aviation Market Competitive Landscape

9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles

10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix

11.1 Related Research

Request Customization of Report @ https://www.verifiedmarketresearch.com/product/global-artificial-intelligence-in-aviation-market-size-and-forecast-to-2025/?utm_source=FHN&utm_medium=005

Highlights of Report

About Us:

Verified Market Research partners with clients to provide insight into strategic and growth analytics: data that help achieve business goals and targets. Our core values include trust, integrity, and authenticity for our clients.

Analysts with high expertise in data gathering and governance utilize industry techniques to collate and examine data at all stages. Our analysts are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research reports.

Contact Us:

Mr. Edwyne Fernandes
Call: +1 (650) 781 4080
Email: [emailprotected]

This post was originally published on Food and Beverage Herald


Top five projections in Artificial Intelligence for 2020 – Economic Times

There has been both good and bad news about AI in 2019. Of course, bad news always gets preference and catches people's minds. Some of the prominent bad news in AI concerned fake-news generation, the creation of porn fakes from social media images, an autonomous vehicle killing a pedestrian, AI systems attacking a production facility, and data biases causing problems in AI applications. On the good-news side, we have seen innovative healthcare applications deployed in hospitals, AI tools helping specially abled people, robots being used in a growing set of domains, and AI assistants and smart devices guiding people in day-to-day queries and chores. The speed of evolution, adoption and research in AI is accelerating. It will be important and essential for society to know what lies ahead on the road, so that we are prepared for the worst and hopeful for the best.

AI will come out of the Data Conundrum

Although one of the main drivers of AI's success story in the last decade has been the availability of exponentially increasing data, the data itself is now becoming one of the key barriers to developing futuristic applications with AI. Advancements in the study of human intelligence also show that our species is very effective at adapting to unseen situations, which contrasts with the current capabilities of AI.

In the past year, there has been significant activity in AI research to tackle this issue. Specific progress has been made in reinforcement learning techniques, which address the limitation of supervised learning methods requiring huge amounts of data. DeepMind's recent achievement sits on top of the success stories in this domain: the StarCraft II system it developed, which took the throne of Grandmaster, is a game changer and an indicator of the tremendous progress and potential of this technology. Generating data through simulation also gained ground in the past year, and it will grow at a much faster pace in 2020.
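The contrast with supervised learning can be made concrete with a toy sketch (our illustration, not anything from the article): an epsilon-greedy bandit agent learns which of three actions pays best purely from its own interaction with the environment, without any labeled dataset.

```python
# Minimal sketch of learning from interaction instead of labeled data:
# an epsilon-greedy agent on a 3-armed bandit. The payout probabilities
# below are invented for illustration.
import random

random.seed(0)
true_means = [0.2, 0.5, 0.8]   # hidden payout probability of each arm
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]       # running estimate of each arm's value

for step in range(5000):
    if random.random() < 0.1:                        # explore
        arm = random.randrange(3)
    else:                                            # exploit best estimate
        arm = max(range(3), key=lambda a: values[a])
    reward = 1 if random.random() < true_means[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values)  # estimates converge toward the true means
```

No labels are ever provided; the agent discovers the best arm from rewards alone, which is the property that lets reinforcement learning sidestep the huge labeled datasets supervised learning needs.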

For many complex applications, it is almost impossible to have data covering every phenomenon of the problem. Autonomous vehicles, healthcare, space research, prediction of natural disasters, and video generation are some of the areas where high-quality simulation data will be much more effective. In most of these cases, real historical data will be too limited to predict new situations that can occur in the future. Space research, for example, produces new discoveries every day that nullify old assumptions; in such a scenario, any AI application relying only on historical data is bound to fail. Simulations of new possibilities built with high-precision software, however, can alter the direction of AI applications in these domains.
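As a minimal sketch of the idea (our own illustration; the parametric sensor model is an assumption, not something the author describes), scarce real data can be augmented with simulated samples that deliberately oversample the rare regime:

```python
# Augmenting scarce real data with simulated samples. A simple
# parametric model of the phenomenon (sensor readings that shift
# upward on failure) stands in for a high-precision simulator.
import numpy as np

rng = np.random.default_rng(42)

def simulate_sensor(n, failure_rate):
    """Draw n sensor readings; failures shift the mean from 0 to 5."""
    failed = rng.random(n) < failure_rate
    readings = rng.normal(loc=np.where(failed, 5.0, 0.0), scale=1.0)
    return readings, failed.astype(int)

# "Real" data: failures are so rare we have almost no positive examples.
real_x, real_y = simulate_sensor(1000, failure_rate=0.001)
# Simulated data: oversample the rare failure regime on purpose.
sim_x, sim_y = simulate_sensor(1000, failure_rate=0.5)

x = np.concatenate([real_x, sim_x])
y = np.concatenate([real_y, sim_y])
print(y.sum(), "positive examples after augmentation")
```

The real dataset alone would give a classifier almost nothing to learn the failure regime from; the simulator fills in the situations history never recorded, which is exactly the role the author sees for simulation in domains like autonomous driving and disaster prediction.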

Even in cases where applications are starving for additional training on local data, public and private organizations are coming forward to share and collaborate on data requirements. Leaders are becoming more conversant with the requirements of AI, and the mindset is changing.

All these factors combined will have a dramatic effect in 2020 and will reduce AI's dependency on data. AI will come out of the data cage.

Machine-Generated Content with Artificial Intelligence Takes Over Crowd Intelligence

We have seen prototypes and demonstrations of content-generating robots producing user reviews, news stories, celebrity images, funny videos, music compositions, short stories and artistic paintings. This will become more sophisticated with the advancement of self-supervised learning led by NVIDIA, Google and Microsoft, which are pushing the boundaries to new frontiers. Most popular online retail stores, food portals, hotel and travel aggregators and the like are built on customer reviews. Until now, these were written by real customers, real humans. Most of us put our faith in the crowd and took their reviews at face value; this has become a key component in driving new sales in different business segments. We were relying on crowd intelligence. But with these new content-generation robots, all such businesses will be flooded with AI-generated reviews, and it will be very easy to fool the customer.

Another critical area is opinion formation around the news, events and issues concerning society. Social media, online campaigns and messaging through different mobile apps have become key resources for building public sentiment on important issues. This is another area facing the immediate danger of artificial machines taking over from human beings in forming opinion. Next year this trend will consolidate, and there will be a visible effect on democratic governments. AI may become a key driver and a primary campaigner in elections. Those organizations, individuals or parties with AI supremacy will be able to win elections and drive the world.

The world will speak and understand one Language: The Language of AI

With the tremendous success and improvements brought by BERT and GPT-2, language translation is coming of age. People talking to anyone outside their community will soon be talking through AI language middleware. In 2019, we already saw devices that can help you converse with people speaking other languages. These offerings are going to become higher quality and more inclusive, with more and more languages being added at an amazing pace. As such technologies come into mass usage, a plethora of applications will be developed, with great impact on business and society. Movement of people, skills and knowledge across borders between speakers of different languages will become more common. This will also transform the cinema, performing arts and travel industries. The phenomenon will affect the higher-education sector too, and will affect different countries in diverse ways. It can prove an economic bonanza or a disaster, depending on how countries plan for and embrace the changes. Proactive leadership that understands the future impact of these technologies will be crucial to a considered transition of society and a happy future for these countries.

AI Boost for the Powerful and AI Poison for the Underprivileged

AI is working for the powerful in the same way industrialization and digitalization did. People with resources are deploying new-age technologies to their advantage: they have the means to invest in new applications and become first adopters. The power of artificial intelligence is being harnessed to optimize manufacturing and energy production, and to increase the efficiency of distribution networks, delivery chains and connectivity. Every big business is progressing toward further AI adoption, including airlines, shipping corporations, mining companies and infrastructure conglomerates. Eventually, AI will further intensify the divide between the haves and have-nots. Common people are becoming pawns in the hands of AI applications; their privacy is under attack. As the cost of labor is devalued by automation and new technologies, wealth will be owned by a tiny percentage of the people in the world.

Genuine voices, groups and organizations should strive for the development of technology with a human face. The UN and other groups are already working toward the Sustainable Development Goals. Now it is time to put a proper framework in place, involving all stakeholders, so that the pace and direction of technology remain under the control of humanity. This will involve developing comprehensive moral, ethical, legal and societal ecosystems governing the use, development and deployment of AI tools, technologies and applications.

Crazy Increase in Defense Budgets for AI-Enabled Weaponization

A few countries are already in advanced stages of developing lethal autonomous warfare systems. Sea Hunter, an autonomous unmanned surface vehicle for anti-submarine warfare, is already operational. China is in the final stages of deploying an army of micro swarm drone systems that can launch suicidal incognito attacks on adversary infrastructure. Other permanent members of the Security Council are working on holistic warfare systems fully integrated with other functions of government. With a complex set of adversaries in place, Israel is working to use AI as a force multiplier and to make fast decisions amid the prevailing nebulosity of hybrid warfare. AI also helps greatly in asymmetric warfare.

AI has unlimited potential to launch cybersecurity attacks of a complexity that will require adversaries to have superior AI capabilities to counter. As the world's major financial systems, including banks and stock markets, are online, they may become easy targets for future AI systems used to blackmail and threaten governments. In recent years we have seen a significant increase in AI-related defense budgets to support AI-enabled weaponization, and this will accelerate further in the coming year(s). Precision attacks on individuals and on countries' distribution and infrastructure networks will be enhanced by AI. We have already seen a precision attack powered by US and Israeli cooperation in Iraq, which resulted in the killing of Iran's top commander.

With all these trends in the pipeline, it will be vital for organizations, countries and the world to set their AI strategy in place. Having competent people who are experts in AI will be indispensable for survival in this new decade. We will need people who understand both human and machine-operated ecosystems and can make emotionally sound judgements to the benefit of humanity.

DISCLAIMER : Views expressed above are the author's own.


Baidu looks to work with Indian institutions on AI – BusinessLine

China's largest search engine, Baidu, is looking to work with Indian institutions in future to make a better world through innovation, said Robin Li, Co-Founder, CEO and Chairman of Baidu.

"India is one of the fastest growing smartphone markets in the world, and a very large developing country, right next to China. Both countries have been growing at a fast pace in the last few decades. For the next decade, we will be more optimistic," he said in a talk at the IIT Madras tech fest Shaastra 2020, titled "Innovation in the age of Artificial Intelligence (AI)".

Outside China, Baidu has a presence in markets like Japan, Thailand and Egypt. However, the company's main product, its search engine, is very much in China. "Once in the age of AI, search will be very different from what is seen today. Once we transform search into a different product, we will be ready to launch that internationally," he said, without committing to anything specific on a foray into India.

Since founding Baidu in January 2000, Robin has led the company to become China's largest search engine, with over 70 per cent market share. China is among only four countries globally, alongside the US, Russia and South Korea, to possess its own core search engine technology. Through innovations such as Box Computing and Baidu's Open Data and Open App Platform, Robin has substantially advanced the theoretical framework of China's Internet sciences, propelling Baidu to the vanguard of China's Internet industry. Baidu is also the largest AI platform company in China.

Li said that the previous decade was that of the Internet, but the coming decade is that of the intelligent economy, with new modes of human-machine interaction. AI is transforming many industries with higher efficiency and lower cost of services. For instance, banks are finding it difficult to open branches, but a virtual assistant can be used to open an account, and customers are more comfortable with a virtual person than a real one.

In the education sector, every student can have a personal assistant, while the pharma industry can accelerate the pace of drug development, with many start-ups already doing this. AI is also transforming transportation, helping reduce traffic delays by 20-30 per cent, he said.

In China, Baidu is using AI to help find missing people, and 9,000 missing people have already been found. "AI can make one immortal. When everything about you can be digitised, computers can learn all about you, creating a digital copy of anyone," he said.

"In the past ten years, people were dependent on mobile phones. But in the next ten years, people will be less dependent on mobile phones, because wherever they go there will be surrounding sensors and infrastructure that can answer the questions that concern you. You may not be required to pull out your mobile phone every time to find an answer. This is the power of AI," he added.


Here’s what AI experts think will happen in 2020 – The Next Web

It's been another great year for robots. We didn't quite figure out how to imbue them with human-level intelligence, but we gave it the old college try and came up with GPT-2 (the text generator so scary it gives Freddy Krueger nightmares) and the AI magic responsible for these adorable robo-cheetahs:

But it's time to let the past go and point our bows toward the future. It's no longer possible to estimate how much the machine learning and AI markets are worth, because the line between what's an AI-based technology and what isn't has become so blurred that Apple, Microsoft, and Google are all AI companies that also do other stuff.

Your local electricity provider uses AI, and so does the person who takes those goofy real-estate agent pictures you see on park benches. "Everything is AI" is an axiom that'll become even truer in 2020.

We solicited predictions for the AI industry over the next year from a panel of experts; here's what they had to say:

AI and humans will collaborate. AI will not replace humans; it will collaborate with humans and enhance how we do things. People will be able to provide higher-level work and service, powered by AI. At Intuit, our platform allows experts to connect with customers to provide tax advice and help small businesses with their books in a more accurate and efficient way, using AI. It helps work get done faster and helps customers make smarter financial decisions. As experts use the product, the product gets smarter, in turn making the experts more productive. This is the decade where, through this collaboration, AI will enhance human abilities and allow us to take our skills and work to a new level.

AI will eat the world in ways we can't imagine today: AI is often talked about as though it is a sci-fi concept, but it is, and will continue to be, all around us. We can already see how software and devices have become smarter in the past few years, and AI has already been incorporated into many apps. AI-enriched technology will continue to change our lives, every day, in what we do and how we operate. Personally, I am busy thinking about how AI will transform finances; I think it will be ubiquitous. Just as we can't imagine the world before the internet or mobile devices, our day-to-day will soon become unimaginable without AI all around us, making our lives today seem obsolete and full of unneeded tasks.

We will see a surge of AI-first apps: As AI becomes part of every app, how we design and write apps will fundamentally change. Instead of writing apps the way we have during this decade and then adding AI, apps will be designed from the ground up around AI, and will be written differently. Just think of conversational user interfaces (CUI) and how they create a new navigation paradigm in your app. Soon, a user will be able to ask any question from any place in the app, moving outside a regular flow. New tools, languages, practices and methods will also continue to emerge over the next decade.

We believe 2020 will be the year that industries not traditionally known as adopters of sophisticated technologies like AI reverse course. We expect industries like waste management, oil and gas, insurance, and telecommunications, along with other SMBs, to take on projects similar to those usually developed by tech giants like Amazon, Microsoft and IBM. As the enterprise benefits of AI become better known, industries outside Silicon Valley will look to integrate these technologies.

If companies don't adapt to current trends in AI, they could see tough times ahead. Increased productivity, operational efficiency gains, market share and revenue are some of the top-line benefits that companies could either capitalize on or miss out on in 2020, depending on their implementation. We expect to see a large uptick in technology adoption and implementation from companies big and small as real-world AI applications, particularly within computer vision, become more widely available.

We don't see 2020 as another year of shiny new technology developments. We believe it will be more about the general availability of established technologies, and that's OK. We'd argue that, at times, true progress can be gauged by how widespread the availability of innovative technologies is, rather than by the technologies themselves. With this in mind, we see technologies like neural networks, computer vision and 5G becoming more accessible as hardware continues to get smaller and more powerful, allowing edge deployment and unlocking new use cases for companies in these areas.

2020 is the year AI/ML capabilities will be truly operationalized, rather than companies merely pontificating about their abilities and potential ROI. We'll see companies in the media and entertainment space deploy AI/ML to more effectively drive investment and priorities within the content supply chain, and harness cloud technologies to expedite and streamline the traditional services required for going to market with new offerings, whether original content or direct-to-consumer streaming experiences.

Leveraging AI toolsets to automate the garnering of insights from deep catalogs of content will increase efficiency for clients and partners, and help uphold the high-quality content that viewers demand. A greater number of studios and content creators will invest in and leverage AI/ML to conform and localize premium and niche content, thereby reaching more diverse audiences in their native languages.

I'm not an industry insider or a machine learning developer, but I covered more artificial intelligence stories this year than I can count. And I think 2019 showed us some disturbing trends that will continue in 2020. Amazon and Palantir are poised to sink their claws into the government surveillance business during what could potentially turn out to be President Donald Trump's final year in office. This will have significant ramifications for the AI industry.

The prospect of an Elizabeth Warren or Bernie Sanders taking office shakes the Facebooks and Microsofts of the world to their core, but companies that are already deeply invested in providing law enforcement agencies with AI systems that circumvent citizen privacy stand to lose even more. These AI companies could be inflated bubbles that pop in 2021; in the meantime they'll look to entrench themselves with law enforcement over the next 12 months in hopes of surviving a Democrat-led government.

Look for marketing teams to get slicker as AI-washing stops being such a big deal and AI rinsing, disguising AI as something else, becomes more common (i.e., Ring is just a doorbell that keeps your packages safe, not an AI-powered portal for police surveillance, wink-wink).

Here's hoping your 2020 is fantastic. And, if we can venture a final prediction: stay tuned to TNW, because we're going to dive deeper into the world of artificial intelligence in 2020 than ever before. It's going to be a great year for humans and machines.



A reality check on artificial intelligence: Can it match the hype? – PhillyVoice.com

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could outthink cancer. Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the Food and Drug Administration, which has approved more than 40 AI products in the past five years, says the potential of digital health is nothing short of revolutionary.

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than with the brand of MRI machine used, the time a blood test is taken, or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.
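Pitfalls like the MRI-brand example are instances of what researchers call shortcut learning. A hedged toy sketch (our own, on synthetic data with scikit-learn, not any system from the article) shows how a model trained where a non-clinical variable correlates with the outcome will lean on that confound:

```python
# Shortcut learning on synthetic data: a logistic regression leans on a
# confound ("which scanner was used") that happens to track the label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Weakly informative clinical feature drives the true outcome.
clinical = rng.normal(size=n)
disease = (clinical + rng.normal(scale=2.0, size=n) > 0).astype(int)
# Confound: in this dataset, sicker patients were scanned on one machine,
# so "scanner" predicts the label almost perfectly by accident.
scanner = (disease + (rng.random(n) < 0.05)).clip(0, 1)

X = np.column_stack([clinical, scanner])
model = LogisticRegression().fit(X, disease)
# The scanner coefficient dwarfs the clinical one: the model found
# the shortcut, and would fail at a hospital with different scanners.
print(model.coef_)
```

Deployed at a facility where scanner assignment no longer tracks illness, such a model's apparent accuracy evaporates, which is one mechanism behind the cross-hospital flops Cho describes.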

"It's only a matter of time before something like this leads to a serious health problem," said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is nearly at the "peak of inflated expectations," concluded a July report from the research company Gartner. As the reality gets tested, there will likely be a rough slide into the "trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis detected more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which would allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Dr. Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices.

In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed substantially equivalent to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the least burdensome system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products is efficient and that it fosters, not impedes, innovation.

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Dr. Jesse Ehrenfeld, who chairs the physician group's board of trustees.

In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Dr. Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first autonomous AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.
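The imbalance Jha describes can be read directly off the reported ratio: two false alarms per correct alert implies a precision of about one in three. A minimal sketch of that arithmetic (the counts below express the ratio, not the study's raw totals):

```python
# Estimate precision (positive predictive value) from the reported
# ratio of two false alarms for every correct kidney-failure alert.
true_positives = 1
false_positives = 2

precision = true_positives / (true_positives + false_positives)
print(round(precision, 2))  # 0.33: roughly one in three alerts is correct
```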

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex and the health care system far more dysfunctional than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Dr. Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

While it is the job of entrepreneurs to think big and take risks, Saini said, it is the job of doctors to protect their patients.

Kaiser Health News (KHN) is a national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation which is not affiliated with Kaiser Permanente.

More:

A reality check on artificial intelligence: Can it match the hype? - PhillyVoice.com

Welcome to the roaring 2020s, the artificial intelligence decade – GreenBiz

This article first appeared in GreenBiz's weekly newsletter, VERGE Weekly, running Wednesdays.

I've long believed the most profound technology innovations are ones we take for granted on a day-to-day basis until "suddenly" they are part of our daily existence, such as computer-aided navigation or camera-endowed smartphones. The astounding complexity of what's "inside" these inventions is what makes them seem simple.

Perhaps that's why I'm so fascinated by the intersection of artificial intelligence and sustainability: the applications being made possible by breakthroughs in machine learning, image recognition, analytics and sensors are profoundly practical. In many instances, the combination of these technologies could completely transform familiar systems and approaches used by the environmental and sustainability communities, making them far smarter with far less human intervention.

Take the camera trap, a pretty common technique used to study wildlife habits and biodiversity and one that has been supported by an array of big-name tech companies. Except what researcher has the time or bandwidth to analyze thousands, let alone millions, of images? Enter systems such as Wildlife Insights, a collaboration between Google Earth and seven organizations, led by Conservation International.

Wildlife Insights is, quite simply, the largest database of public camera-trap images in the world: it includes 4.5 million photos that have been analyzed and mapped with AI for characteristics such as country, year, species and so forth. Scientists can use it to upload their own trap photos, visualize territories and gather insights about species health.

Here's the jaw-dropper: This AI-endowed database can analyze 3.6 million photos in an hour, compared with the 300 to 1,000 images that you or I can handle. Depending on the species, the accuracy of identification is between 80 and 98.6 percent. Plus, the system automatically discounts shots where no animals are present: no more blanks.
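Putting those throughput figures side by side makes the gap concrete; a quick back-of-envelope comparison using the article's numbers, assuming the optimistic end of the human range:

```python
# Compare the AI database's stated throughput with a human reviewer's.
ai_photos_per_hour = 3_600_000
human_photos_per_hour = 1_000  # upper end of the quoted 300-1,000 range

speedup = ai_photos_per_hour / human_photos_per_hour
print(speedup)  # 3600.0: thousands of times faster, even in the best human case
```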


At the same time, we are certainly right to be cautious about the potential side effects of AI. That theme comes through loud and clear in five AI predictions published by IBM in mid-December. Two resonate with me the most: first, the idea that AI will be instrumental in building trust and ensuring that data is governed in ways that are secure and reliable; and second, that before we get too excited about all the cool things AI might be able to do, we need to make sure that it doesn't exacerbate the problem. That means spending more time focused on ways to make the data centers behind AI applications less energy-intensive and less impactful from a materials standpoint.

From an ethical standpoint, I also have two big concerns: first, that sufficient energy is put into ensuring that the data behind the AI predictions we will come to rely on more heavily isn't flawed or biased. That means spending time to make sure a diverse set of human perspectives are represented and that the numbers are right in the first place. And second, we must view these systems as part of the overall solution, not replacements for human workers.

As IBM's vice president of AI research, Sriram Raghavan, puts it: "New research from the MIT-IBM Watson AI Lab shows that AI will increasingly help us with tasks such as scheduling, but will have a less direct impact on jobs that require skills such as design expertise and industrial strategy. Expect workers in 2020 to begin seeing these effects as AI makes its way into workplaces around the world; employers have to start adapting job roles, while employees should focus on expanding their skills."

Projections by tech market research firm IDC suggest that spending on AI systems could reach $97.9 billion in 2023, 2.5 times the estimated $37.5 billion spent in 2019. Why now? It's a combination of geeky factors: faster chips, better cameras and massive cloud data-processing services. Plus, did I mention that we don't really have time to waste?
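Those IDC figures imply annual growth of roughly 27 percent; a quick sanity check of the projection, using the article's two data points and four years of growth:

```python
# Check the implied growth in IDC's projection: $37.5B (2019) -> $97.9B (2023).
spend_2019 = 37.5  # billions of dollars
spend_2023 = 97.9

ratio = spend_2023 / spend_2019   # the article rounds this to ~2.5x
cagr = ratio ** (1 / 4) - 1       # implied compound annual growth rate
print(f"{ratio:.2f}x, {cagr:.1%}")
```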

Where will AI-enabled applications really make a difference for environmental and corporate sustainability? Here are five areas where I believe AI will have an especially dramatic impact over the next decade.

For more inspiration and background on the possibilities, I suggest this primer (PDF) published by the World Economic Forum. And, consider this your open invitation to alert me about the intriguing applications of AI you're seeing in your own work.

Here is the original post:

Welcome to the roaring 2020s, the artificial intelligence decade - GreenBiz

The U.S. Patent and Trademark Office Takes on Artificial Intelligence – JD Supra


Read more here:

The U.S. Patent and Trademark Office Takes on Artificial Intelligence - JD Supra

Illinois regulates artificial intelligence like HireVue's used to analyze online job interviews – Vox.com

Artificial intelligence is increasingly playing a role in companies' hiring decisions. Algorithms help target ads about new positions, sort through resumes, and even analyze applicants' facial expressions during video job interviews. But these systems are opaque, and we often have no idea how artificial intelligence-based systems are sorting, scoring, and ranking our applications.

It's not just that we don't know how these systems work. Artificial intelligence can also introduce bias and inaccuracy to the job application process, and because these algorithms largely operate in a black box, it's not really possible to hold a company that uses a problematic or unfair tool accountable.

A new Illinois law, one of the first of its kind in the US, is supposed to provide job candidates a bit more insight into how these unregulated tools actually operate. But it's unlikely the legislation will change much for applicants. That's because it only applies to a limited type of AI, and it doesn't ask much of the companies deploying it.

Set to take effect January 1, 2020, the state's Artificial Intelligence Video Interview Act has three primary requirements. First, companies must notify applicants that artificial intelligence will be used to consider applicants' fitness for a position. Those companies must also explain how their AI works and what general types of characteristics it considers when evaluating candidates. In addition to requiring applicants' consent to use AI, the law also includes two provisions meant to protect their privacy: It limits who can view an applicant's recorded video interview to those whose expertise or technology is necessary and requires that companies delete any video that an applicant submits within a month of their request.

As Aaron Rieke, the managing director of the technology rights nonprofit Upturn, told Recode about the law, "This is a pretty light touch on a small part of the hiring process." For one thing, the law only covers artificial intelligence used in videos, which constitutes a small share of the AI tools that can be used to assess job applicants. And the law doesn't guarantee that you can opt out of an AI-based review of your application and still be considered for a role (all the law says is that a company has to gain your consent before using AI; it doesn't require that hiring managers give you an alternative method).

"It's hard to feel that that consent is going to be super meaningful if the alternative is that you get no shot at the job at all," said Rieke. He added that there's no guarantee that the consent and explanation the law requires will be useful; for instance, the explanation could be so broad and high-level that it's not helpful.

"If I were a lawyer for one of these vendors, I would say something like, 'Look, we use the video, including the audio language and visual content, to predict your performance for this position using tens of thousands of factors,'" said Rieke. "If I was feeling really conservative, I might name a couple general categories of competency." (He also points out that the law doesn't define artificial intelligence, which means it's difficult to tell what companies and what types of systems the law actually applies to.)

Because the law is limited to AI that's used in video interviews, the company it most clearly applies to is Utah-based HireVue, a popular job interview platform that offers employers an algorithm-based analysis of recorded video interviews. Here's how it works: You answer pre-selected questions over your computer or phone camera. Then, an algorithm developed by HireVue analyzes how you've answered the questions, and sometimes even your facial expressions, to make predictions about your fit for a particular position.

HireVue says it already has about 100 clients using this artificial intelligence-based feature, including major companies like Unilever and Hilton.

Some candidates who have used HireVue's system complain that the process is awkward and impersonal. But that's not the only problem. Algorithms are not inherently objective, and they reflect the data used to train them and the people who design them. That means they can inherit, and even amplify, societal biases, including racism and sexism. And even if an algorithm is explicitly instructed not to consider factors like a person's name, it can still learn proxies for protected identities (for instance, an algorithm could learn to discriminate against people who have gone to a women's college).
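That proxy effect is easy to reproduce in miniature. The sketch below uses entirely hypothetical data and a deliberately naive decision rule: gender is never an input, yet because the historical hires correlate with the college feature, the learned rule still produces a large gap between groups:

```python
# Toy illustration (hypothetical data): a model with the protected
# attribute removed can still reproduce bias via a correlated proxy.
applicants = [
    # (attended_womens_college, years_experience, gender, historically_hired)
    (1, 5, "F", 0),
    (1, 7, "F", 0),
    (0, 5, "M", 1),
    (0, 7, "M", 1),
    (0, 6, "F", 1),
    (1, 6, "F", 0),
]

# In this data, historical hires never attended a women's college, so a
# naive rule "trained" on it penalizes that feature and ignores experience.
def predict(attended_womens_college, years_experience):
    return 0 if attended_womens_college else 1

# Gender is never passed to predict(), yet predicted hire rates diverge.
by_gender = {}
for college, years, gender, _ in applicants:
    by_gender.setdefault(gender, []).append(predict(college, years))

rates = {g: sum(v) / len(v) for g, v in by_gender.items()}
print(rates)  # women's predicted hire rate falls far below men's
```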

Facial recognition tech, in particular, has faced criticism for struggling to identify and characterize the faces of people with darker skin, women, and trans and non-binary people, among other minority groups. Critics also say that emotion (or affect) recognition technology in particular, which purports to make judgments about a person's emotions based on their facial expressions, is scientifically flawed. That's why one research nonprofit, the AI Now Institute, called for the prohibition of such technology in high-stakes decision-making, including job applicant vetting.

"[W]hile you're being interviewed, there's a camera that's recording you, and it's recording all of your micro facial expressions and all of the gestures you're using, the intonation of your voice, and then pattern matching those things that they can detect with their highest performers," AI Now Institute co-founder Kate Crawford told Recode's Kara Swisher earlier this year. "[It] might sound like a good idea, but think about how you're basically just hiring people who look like the people you already have."

Even members of Congress are worried about that technology. In 2018, US Sens. Kamala Harris, Elizabeth Warren, and Patty Murray wrote to the Equal Employment Opportunity Commission, the federal agency charged with investigating employment discrimination, asking whether such facial analysis technology could violate anti-discrimination laws.

Despite being one of the first laws to regulate these tools, the Illinois law doesn't address concerns about bias. No federal legislation explicitly regulates these AI-based hiring systems. Instead, employment lawyers say such AI tools are generally subject to the Uniform Guidelines, employment discrimination standards created by several federal agencies back in 1978 (you can read more about that here).

The EEOC did not respond to Recode's multiple requests for comment.

Meanwhile, it's not clear how, under Illinois' new law, companies like HireVue will go about explaining the characteristics in applicants that its AI considers, given that the company claims its algorithms can weigh up to tens of thousands of factors (it says it removes factors that are not predictive of job success).

The law also doesn't explain what an applicant might be entitled to if a company violates one of its provisions. Law firms advising clients on compliance have also noted that it's not clear whether the law applies exclusively to businesses filling a position in Illinois, or just to interviews that take place in the state. Neither Illinois State Sen. Iris Martinez nor Illinois Rep. Jaime M. Andrade, legislators who worked on the law, responded to a request for comment by the time of publication.

HireVue's CEO Kevin Parker said in a blog post that the law entails very little, if any, change because its platform already complies with GDPR's principles of transparency, privacy, and the right to be forgotten. "[W]e believe every job interview should be fair and objective, and that candidates should understand how they're being evaluated. This is fair game, and it's good for both candidates and companies," he wrote in August.

A spokesperson for HireVue said the decision to provide an alternative to an AI-based analysis is up to the company that's hiring, but argued that those alternatives can be more time-consuming for candidates. If a candidate believes that a system is biased, the spokesperson said, recourse options are the same as when a candidate believes that any part of the hiring process, or any individual interviewer, was unfairly biased against them.

Under the new law in Illinois, if you participate in a video interview that uses AI tech, you can ask for your footage to be deleted after the fact. But it's worth noting that the law appears to still give the company enough time to train its model on the results of your job interview, even if you think the final decision was problematic.

"This gives these AI hiring companies room to continue to learn," says Rieke. "They're going to delete the underlying video, but any learning or improvement to their systems they get to keep."

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

Continued here:

Illinois regulates artificial intelligence like HireVue's used to analyze online job interviews - Vox.com

Can medical artificial intelligence live up to the hype? – Los Angeles Times

Health products powered by artificial intelligence are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," Topol said. Even the Food and Drug Administration, which has approved more than 40 AI products in the last five years, says the potential of digital health is "nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Some doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than with the brand of MRI machine used, the time a blood test is taken, or whether a patient was visited by a chaplain.

In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air.

Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis detected more small polyps than standard colonoscopy, was published online in October.

Few tech start-ups publish their research in peer-reviewed journals, which would allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval. "None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers dont bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and coauthor of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the last decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices.

In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said.

The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed "substantially equivalent" to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products is efficient and that it fosters, not impedes, innovation.

Under the plan, the FDA would pre-certify companies that demonstrate a "culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review, or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation.

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

Some AI devices are more carefully tested than others. An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the test, sold as IDx-DR, right, said Dr. Michael Abramoff, the company's founder and executive chairman.

IDx-DR is the first autonomous AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English rather than to cognitive impairment, said coauthor Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.
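This failure mode can be sketched in a few lines (the numbers below are made up for illustration, not the Mount Sinai results): a model that latches onto a shortcut feature, such as "was this a portable X-ray?", looks accurate at the hospital it was trained in and then flops elsewhere.

```python
# Toy sketch of shortcut learning: the "model" predicts pneumonia
# purely from whether the film was a portable bedside X-ray.
def portable_classifier(is_portable):
    # Learned shortcut: predict pneumonia whenever the film is portable.
    return 1 if is_portable else 0

def accuracy(cases):
    # cases: list of (is_portable, has_pneumonia) pairs
    return sum(portable_classifier(p) == y for p, y in cases) / len(cases)

# Training hospital: portable films mostly come from sicker patients,
# so the shortcut correlates with disease.
train = [(1, 1)] * 80 + [(1, 0)] * 20 + [(0, 0)] * 80 + [(0, 1)] * 20

# Another hospital: same disease rate, different imaging workflow,
# so the shortcut no longer tracks disease at all.
other = [(1, 1)] * 50 + [(1, 0)] * 50 + [(0, 0)] * 50 + [(0, 1)] * 50

print(f"train-hospital accuracy: {accuracy(train):.2f}")  # 0.80
print(f"other-hospital accuracy: {accuracy(other):.2f}")  # 0.50
```

The classifier never looks at the lungs at all, yet it scores well wherever the workflow correlation holds, which is exactly why cross-hospital validation exposed the problem.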

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said.
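The "two false alarms for every correct result" figure implies that only about one alert in three was right, which follows from the standard precision formula (the absolute counts below are illustrative, scaled from the reported ratio):

```python
# Two false alarms per true alert => precision = TP / (TP + FP) = 1/3.
true_alerts = 100                # illustrative count of correct alerts
false_alarms = 2 * true_alerts   # the 2:1 ratio reported in the study

precision = true_alerts / (true_alerts + false_alarms)
print(f"precision: {precision:.2f}")  # 0.33
```

In other words, for clinicians responding to the app, roughly two out of every three alerts would have been noise, which is one plausible reading of why outcomes didn't improve.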

Google had no comment in response to Jha's conclusions.

This story was written for Kaiser Health News, an editorially independent publication of the Kaiser Family Foundation.

See original here:

Can medical artificial intelligence live up to the hype? - Los Angeles Times

How This Cofounder Created An Artificial Intelligence Styling Company To Help Consumers Shop – Forbes

Michelle Harrison Bacharach, the cofounder and CEO of FindMine, an AI styling company, has designed a technology, Complete the Look, that creates complete outfits around retailers' products. It blends the art of styling with the ease of automation to represent a company's brand(s) at scale and help answer the question "how do I wear this?" The technology shows shoppers how to wear clothes with accessories, using artificial intelligence to scale out the guidance the retailer would provide. FindMine serves over 1.5 billion requests for outfits per year across e-commerce and mobile platforms, and increases AOV (average order value) and conversions by up to 150% with full outfits.

Michelle Bacharach, Cofounder and CEO of FINDMINE, an AI styling company.

"I'm picky about user experiences," Bacharach explains. "When I was a consumer in my life, shopping, I was always frustrated by the friction that it caused that I was sold a product in isolation. If I buy a scarf, what do I wear with the scarf? What are the shoes and the top and the jacket? Just answer that question for me when I buy the scarf. Why is it so hard? I started asking those questions as a consumer. Then I started looking into why retailers don't do that. It's because they have a bunch of friction on their side. They have to put together the shirt and the shoe and the pant and the bag and the jacket that go with that outfit. So, because it's manual, and they have tens of thousands of products, and products come and go so frequently, it's literally impossible to keep up with. It's physically impossible for them to give an answer to every consumer … My hypothesis was that I would spend more money if they sold me all the other pieces and showed me how to use it. I started looking into [the hypothesis], and it turned out to be true; consumers spend more money when they actually understand the whole package."

Bacharach began working for a startup in Silicon Valley after graduating from college. She focused on user experience analysis and product management, which meant she looked at customer service tickets and the analytical data around how customers were using the products. After the analysis, she'd make fixes, suggest new features, and prioritize those with the tech team.

She always knew she wanted to start her own company. Working at the startup provided her the opportunity to understand how all the different sectors of an organization operated. However, she had always been curious about the possibility of acting, and decided to move to Los Angeles to try to become a professional actress. "I ended up deciding that the part of acting that I liked the most was auditioning and competing for the job and positioning and marketing myself," she explains. "If you talk to any other actors, that's the part they hate the most. I realized that I should go to business school and focus on the entertainment industry because that's the part of it that really resonated with me."

FINDMINE is part of the SAP CX innovation ecosystem and is currently part of the latest SAP.iO Foundry startup accelerator in San Francisco.

After graduating from business school, Bacharach entered the corporate world, where she worked on corporate strategy and product management. The company she worked for underwent a culture shift, which made working there difficult. At that point, she had two options: she could either find another position with a different company or start her own business venture. "I didn't really know what that thing was going to be," Bacharach says. "I used that as kind of a forcing function to sit down with my list of ideas and decide, what the heck am I going to work on. I thought about it as a time off, like a six-month sabbatical, to try to figure out what we're doing. Then I'm going to get invested in from my idea, and then I'm going to be back on the salary wagon and be able to make a living again. I thought it's all going to be so easy. That's what started the journey of me becoming an entrepreneur." It took two-and-a-half years before she earned a livable salary.

"I worked for a startup," she states. "I watched other people do it. I was a consultant to start off. I worked in corporate America. So, I saw the other side of the coin in the way that that world functions. I didn't want to do this for the long term. I like the early stages of stuff. In retrospect, I guess I did prepare myself, but I didn't know it while I was going through it. I just jumped in."

As Bacharach continues to expand FindMine with the ever-updating artificial intelligence technology, she focuses on the following essential steps to help her with each pivot:

Michelle Bacharach, Cofounder and CEO of FINDMINE, sat down with John Furrier at the Intel AI Lounge at South by Southwest 2017 in Austin, Texas.

"Don't worry about getting 100% right," Bacharach concludes. "Don't look at people who are successful and say, 'oh, wow. They're so different from me. I can never do that.' Look at them and say they're exactly the same as me. They're just two or three years ahead in terms of their learnings and findings. I have to do that same thing, but for whatever I want to start."

The rest is here:

How This Cofounder Created An Artificial Intelligence Styling Company To Help Consumers Shop - Forbes