Artificial intelligence and machine learning promise to revolutionize healthcare.
Proponents say these technologies will help diagnose ailments more quickly and more accurately, as well as help monitor people's health and take over a swath of doctors' paperwork so they can see more patients.
At least, that's the promise.
There's been an exponential increase in approvals from the Food and Drug Administration (FDA) for these types of health products, as well as projections that artificial intelligence (AI) in healthcare will become an $8 billion industry by 2022.
However, many experts are urging people to pump the brakes on the AI craze.
"[AI] has the potential to democratize healthcare in ways we can only dream of by allowing equal care for all. However, it is still in its infancy and it needs to mature," José Morey, MD, a physician, AI expert, and former associate chief health officer for IBM Watson, told Healthline.
"Consumers should be wary of rushing to a new facility simply because they may be providing a new AI tool, especially if it is for diagnostics," he said. "There are really just a handful of physicians across the world that are practicing that understand the strengths and benefits of what is currently available."
But what exactly is artificial intelligence in a medical context?
It starts with machine learning, which refers to algorithms that enable a computer program to learn by incorporating increasingly large and dynamic amounts of data, according to Wired magazine.
The terms machine learning and AI are often used interchangeably.
To understand machine learning, imagine a given set of data, say a set of X-rays that do or do not show a broken bone, and having a program try to guess which ones show breaks.
The program will likely get most of the diagnoses wrong at first, but then you give it the correct answers and the machine learns from its mistakes and starts to improve its accuracy.
Rinse and repeat this process hundreds or thousands (or millions) of times and, theoretically, the machine will be able to accurately model, select, or predict for a given goal.
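To make that loop concrete, here is a minimal sketch of supervised learning in Python, using scikit-learn and synthetic stand-in data rather than real X-ray images; the feature values, labels, and accuracy check are illustrative assumptions, not any actual diagnostic product.

```python
# A minimal sketch of the guess-check-correct loop described above.
# Assumes scikit-learn; synthetic feature vectors stand in for X-ray images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Fake "X-ray" data: 1,000 scans, each summarized as 20 numeric features.
X = rng.normal(size=(1000, 20))
# Fake ground-truth labels: 1 = fracture visible, 0 = no fracture.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Keep some scans aside so accuracy is measured on cases the model never saw.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression()
model.fit(X_train, y_train)           # "learning from the correct answers"
predictions = model.predict(X_test)   # guesses on unseen scans

print(f"Accuracy on held-out scans: {accuracy_score(y_test, predictions):.2f}")
```

The held-out test set is the key design choice here: accuracy only counts when it is measured on cases the model never saw during training.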
So it's easy to see how, in healthcare (a field that deals with massive amounts of patient data), machine learning could be a powerful tool.
"One of the key areas where AI is showing promise is in diagnostic analysis, where the AI system will collect and analyze data sets on symptoms to diagnose the potential issue and offer treatment solutions," John Bailey, director of sales for the healthcare technology company Chetu Inc., told Healthline.
"This type of functionality can further assist doctors in determining the illness or condition and allow for better, more responsive care," he said. "Since AI's key benefit is in pattern detection, it can also be leveraged in identifying, and assist in containing, illness outbreaks and antibiotic resistance."
That all sounds great. So what's the hitch?
"The problem lies in a lack of reproducibility in real-world settings," Morey said. "If you don't test on large, robust datasets that go beyond just one facility or one machine, then you potentially develop bias into the algorithm that will ultimately only work in one very specific setting but won't be compatible with large-scale rollout."
He added, "The lack of reproducibility is something that affects a lot of science, but AI in healthcare in particular."
For instance, a study in the journal Science found that even when AI is tested in a clinical setting, it's often only tested in a single hospital and risks failing when moved to another clinic.
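A toy illustration of that failure mode, using synthetic data and hypothetical "hospital A" and "hospital B" sites (none of this comes from the study itself), shows how a model can look accurate at its home institution yet degrade when the data distribution shifts:

```python
# A hedged sketch of the reproducibility problem: a model fit to one hospital's
# data can score well there yet degrade at another site whose patient mix or
# equipment differs. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_site(n, shift):
    """Generate synthetic 'patient' features and labels for one site.
    `shift` stands in for site-specific differences (equipment, demographics)."""
    X = rng.normal(loc=shift, size=(n, 10))
    y = (X[:, 0] - shift + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_a, y_a = make_site(2000, shift=0.0)   # hospital A: where the model is built
X_b, y_b = make_site(2000, shift=1.5)   # hospital B: different data distribution

X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)

print("Accuracy at hospital A (held out):", round(accuracy_score(y_te, model.predict(X_te)), 2))
print("Accuracy at hospital B (new site): ", round(accuracy_score(y_b, model.predict(X_b)), 2))
```

In this contrived setup, the second site's readings are systematically shifted, and the model's accuracy falls sharply even though nothing about the model itself changed.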
Then there's the issue of the data itself.
"Machine learning is only as good as the data sets the machines are working with," said Ray Walsh, a digital privacy expert at ProPrivacy.
"A lack of diversity in the datasets used to train up medical AI could lead to algorithms unfairly discriminating against under-represented demographics," Walsh told Healthline.
"This can create AI that is prejudiced against certain people," he continued. "As a result, AI could lead to prejudice against particular demographics based on things like high body mass index (BMI), race, ethnicity, or gender."
Meanwhile, the FDA has fast-tracked approval of AI-driven products, from approving just 1 in 2014 to 23 in 2018.
Many of these products haven't been subjected to clinical trials since they utilize the FDA's 510(k) approval path, which allows companies to market products without clinical trials as long as they are "at least as safe and effective, that is, substantially equivalent," to a legally marketed device.
This process has made many in the AI health industry happy. This includes Elad Walach, the co-founder and chief executive officer of Aidoc, a startup focused on eliminating bottlenecks in medical image diagnosis.
"The FDA 510(k) process has been very effective," Walach told Healthline. "The key steps include clinical trials applicable to the product and a robust submission process with various types of documentation addressing the key aspects of the claim and potential risks."
"The challenge the FDA is facing is dealing with the increasing pace of innovation coming from AI vendors," he added. "Having said that, in the past year they progressed significantly on this topic and created new processes to deal with the increase in AI submissions."
But not everyone is convinced.
"The FDA has a deeply flawed approval process for existing types of medical devices, and the introduction of additional technological complexity further exposes those regulatory inadequacies. In some instances, it might also raise the level of risk," said David Pring-Mill, a consultant to tech startups and opinion columnist at TechHQ.
"New AI products have a dynamic relationship with data. To borrow a medical term, they aren't quarantined. The idea is that they are always learning, but perhaps it's worth challenging the assumption that a change in outputs always represents an improved product," he said.
The fundamental problem, Pring-Mill told Healthline, is that the 510(k) pathway allows medical device manufacturers to leapfrog ahead without really proving the merits of their products.
One way or another, machine learning and AI integration into the medical field is here to stay.
Therefore, the implementation will be key.
"Even if AI takes on the data processing role, physicians may get no relief. We'll be swamped with input from these systems, queried incessantly for additional input to rule in or out possible diagnoses, and presented with varying degrees of pertinent information," Christopher Maiona, MD, SFHM, the chief medical officer at PatientKeeper Inc., which specializes in optimizing electronic health records, told Healthline.
"Amidst such a barrage, the system's user interface will be critical in determining how information is prioritized and presented so as to make it clinically meaningful and practical to the physician," he added.
And AI's success in medicine, both now and in the future, may ultimately still rely on the experience and intuition of human beings.
"A computer program cannot detect the subtle nuances that come with years of caring for patients as a human," David Gregg, MD, chief medical officer for StayWell, a healthcare innovation company, told Healthline.
"Providers can detect certain cues, connect information and tone and inflection when interacting with patients that allow them to create a relationship and provide more personalized care," he said. "AI simply delivers a response to data, but cannot address the emotional aspects or react to the unknown."