"Companies come in promising the world and often don't deliver," said Bob Wachter, head of the department of medicine at the University of California, San Francisco. "When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."
Administrators say algorithms (the software that processes data) from outside companies don't always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.
But it's slow going. Research based on job postings shows health care trailing every industry except construction in adopting AI.
The Food and Drug Administration has taken steps to develop a model for evaluating AI, but it is still in its early days. There are questions about how regulators can monitor algorithms as they evolve and rein in the technology's detrimental aspects, such as bias that threatens to exacerbate health care inequities.
"Sometimes there's an assumption that AI is working, and it's just a matter of adopting it, which is not necessarily true," said Florenta Teodoridis, a professor at the University of Southern California's business school whose research focuses on AI. She added that being unable to understand why an algorithm came to a certain result is fine for things like predicting the weather. But in health care, its impact is potentially life-changing.
Despite the obstacles, the tech industry is still enthusiastic about AI's potential to transform health care.
"The transition is slightly slower than I hoped but well on track for AI to be better than most radiologists at interpreting many different types of medical images by 2026," Hinton told POLITICO via email. He said he never suggested getting rid of radiologists, but rather letting AI read scans for them.
If he's right, artificial intelligence will start taking on more of the rote tasks in medicine, giving doctors more time to spend with patients to reach the right diagnosis or develop a comprehensive treatment plan.
"I see us moving as a medical community to a better understanding of what it can and cannot do," said Lara Jehi, chief research information officer for the Cleveland Clinic. "It is not going to replace radiologists, and it shouldn't replace radiologists."
Radiology is one of the most promising use cases for AI. The Mayo Clinic has a clinical trial evaluating an algorithm that aims to reduce the hours-long process oncologists and physicists undertake to map out a surgical plan for removing complicated head and neck tumors.
An algorithm can do the job in an hour, said John D. Halamka, president of Mayo Clinic Platform: "We've taken 80 percent of the human effort out of it." The technology gives doctors a blueprint they can review and tweak without having to do the basic physics themselves, he said.
NYU Langone Health has also experimented with using AI in radiology. The health system has collaborated with Facebook's Artificial Intelligence Research group to reduce the time it takes to get an MRI from one hour to 15 minutes. Daniel Sodickson, a radiological imaging expert at NYU Langone who worked on the research, sees opportunity in AI's ability to downsize the amount of data doctors need to review.
Covid has accelerated AI's development. Throughout the pandemic, health providers and researchers shared data on the disease and anonymized patient data to crowdsource treatments.
Microsoft and Adaptive Biotechnologies, which partner on machine learning to better understand the immune system, put their technology to work on patient data to see how the virus affected the immune system.
"The amount of knowledge that's been obtained and the amount of progress has just been really exciting," said Peter Lee, corporate vice president of research and incubations at Microsoft.
There are other success stories. For example, Ochsner Health in Louisiana built an AI model for detecting early signs of sepsis, a life-threatening response to infection. To convince nurses to adopt it, the health system created a response team to monitor the technology for alerts and take action when needed.
"I'm calling it our care traffic control," said Denise Basow, chief digital officer at Ochsner Health. Since implementation, she said, deaths from sepsis have declined.
The biggest barrier to the use of artificial intelligence in health care has to do with infrastructure.
Health systems need to enable algorithms to access patient data. Over the last several years, large, well-funded systems have invested in moving their data into the cloud, creating vast data lakes ready to be consumed by artificial intelligence. But that's not as easy for smaller players.
Another problem is that every health system is unique in its technology and the way it treats patients. That means an algorithm may not work as well everywhere.
Over the last year, an independent study on a widely used sepsis detection algorithm from EHR giant Epic showed poor results in real-world settings, suggesting where and how hospitals used the AI mattered.
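The mechanism behind that kind of failure is easy to demonstrate: a model whose weights were fit to one hospital's data can degrade at a second hospital simply because the same measurement is recorded differently there. The sketch below uses entirely synthetic data; the features, weights, and the lab-calibration shift are illustrative assumptions, not details of Epic's sepsis model.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(y, score):
    # Area under the ROC curve via the Mann-Whitney statistic:
    # the probability that a random positive outranks a random negative.
    order = np.argsort(score)
    rank = np.empty(len(score), dtype=float)
    rank[order] = np.arange(1, len(score) + 1)
    pos = y == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (rank[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def make_site(n, lab_scale):
    # Two synthetic inputs. The second is a lab value whose calibration
    # differs between hospitals (lab_scale); the true sepsis risk depends
    # on the underlying physiology, which is the same everywhere.
    x0 = rng.normal(size=n)
    z = rng.normal(size=n)          # underlying physiology
    x1 = lab_scale * z              # what this site's EHR actually records
    p = 1 / (1 + np.exp(-(1.5 * x0 + 1.5 * z)))
    y = (rng.random(n) < p).astype(int)
    return x0, x1, y

# "Vendor" model: fixed weights that fit Site A, where lab_scale == 1.
w0, w1 = 1.5, 1.5

results = {}
for name, scale in [("site_a", 1.0), ("site_b", 5.0)]:
    x0, x1, y = make_site(20000, scale)
    score = w0 * x0 + w1 * x1       # the same frozen model at both sites
    results[name] = auc(y, score)
    print(f"{name}: AUC = {results[name]:.2f}")
```

Nothing about the model changed between the two sites; only the local meaning of one input did, which is why independent, in-context evaluation of the kind described above matters.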
This quandary has led top health systems to build out their own engineering teams and develop AI in-house.
That could create complications down the road. Unless health systems sell their technology, it's unlikely to undergo the type of vetting that commercial software would. That could allow flaws to go unfixed for longer than they might otherwise. It's not just that the health systems are implementing AI while no one's looking. It's also that the stakeholders in artificial intelligence, across health care, technology and government, haven't agreed upon standards.
A lack of quality data, which gives algorithms material to work with, is another significant barrier in rolling out the technology in health care settings.
Much data comes from electronic health records but is often siloed among health care systems, making it more difficult to gather sizable data sets. For example, a hospital may have complete data on one visit, but the rest of a patient's medical history is kept elsewhere, making it harder to draw inferences about how to proceed in caring for the patient.
"We have pieces and parts, but not the whole," said Aneesh Chopra, who served as the government's chief technology officer under former President Barack Obama and is now president of data company CareJourney.
While some health systems have invested in pulling data from a variety of sources into a single repository, not all hospitals have the resources to do that.
Health care also has strong privacy protections that limit the amount and type of data tech companies can collect, leaving the sector behind others in terms of algorithmic horsepower.
Importantly, not enough strong data on health outcomes is available, making it more difficult for providers to use AI to improve how they treat patients.
That may be changing. A recent series of studies on a sepsis algorithm included copious details on how to use the technology in practice and documented physician adoption rates. Experts have hailed the studies as a good template for how future AI studies should be conducted.
But working with health care data is also more difficult than in other sectors because it is highly individualized.
"We found that even internally across our different locations and sites, these models don't have a uniform performance," said Jehi of the Cleveland Clinic.
And the stakes are high if things go wrong. "The number of paths that patients can take are very different than the number of paths that I can take when I'm on Amazon trying to order a product," Wachter said.
Health experts also worry that algorithms could amplify bias and health care disparities.
For example, a 2019 study found that a hospital algorithm more often pushed white patients toward programs aiming to provide better care than Black patients, even while controlling for the level of sickness.
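One way such a disparity can arise, and the mechanism that study's authors pointed to, is a biased proxy label: if a model is trained to predict health care costs as a stand-in for health need, a group that incurs lower costs at the same level of sickness will be under-referred even by a model that predicts costs perfectly. The sketch below is synthetic and illustrative; every number in it is an assumption, not a figure from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
# Two groups with identical distributions of actual sickness.
group = rng.integers(0, 2, n)
sickness = rng.normal(5.0, 1.0, n)

# Observed cost is the proxy label: group 1 incurs lower cost at the
# same level of sickness (e.g., due to unequal access to care).
cost = sickness * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.5, n)

# A "perfect" cost predictor flags the top 10% for extra-care programs.
threshold = np.quantile(cost, 0.90)
flagged = cost > threshold

rate0 = flagged[group == 0].mean()
rate1 = flagged[group == 1].mean()
print(f"Flag rate, group 0: {rate0:.1%}")
print(f"Flag rate, group 1: {rate1:.1%}")
```

Because both groups are equally sick by construction, the gap in flag rates comes entirely from the label the model was asked to predict, not from any error in the prediction itself.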
Last year, the FDA published a set of guidelines for using AI as a medical device, calling for the establishment of good machine learning practices, oversight of how algorithms behave in real-world scenarios and development of research methods for rooting out bias.
The agency subsequently published more specific guidelines on machine learning in radiological devices, requiring companies to outline how the technology is supposed to perform and provide evidence that it works as intended. The FDA has cleared more than 300 AI-enabled devices, largely in radiology, since 1997.
Regulating algorithms is a challenge, particularly given how quickly the technology advances. The FDA is attempting to head that off by requiring companies to institute real-time monitoring and submit plans on future changes.
But in-house AI isn't subject to FDA oversight. Bakul Patel, former head of the FDA's Center for Devices and Radiological Health and now Google's senior director for global digital health strategy and regulatory affairs, said that the FDA is thinking about how it might regulate noncommercial artificial intelligence inside of health systems, but he adds, "there's no easy answer."
The FDA has to thread the needle between taking enough action to mitigate flaws in algorithms and not stifling AI's potential, he said.
Some argue that public-private standards for AI would help advance the technology. Groups, including the Coalition for Health AI, whose members include major health systems and universities as well as Google and Microsoft, are working on this approach.
But the standards they envision would be voluntary, which could blunt their impact if not widely adopted.
Link: Artificial intelligence was supposed to transform health care. It hasn't. - POLITICO