The potential and the pitfalls of medical AI

A pioneering ophthalmologist highlights plenty of both

Jun 13th 2020

THE BOOKS strewn around Pearse Keane's office at Moorfields Eye Hospital in London are an unusual selection for a medic. "The Information", a 500-page doorstop by James Gleick on the mathematical roots of computer science, sits next to Neal Stephenson's even heftier "Cryptonomicon", an alt-history novel full of cryptography and prime numbers. Nearby is "The Player of Games" by the late Iain M. Banks, whose sci-fi novels describe a utopian civilisation in which AI has abolished work.

Dr Keane is an ophthalmologist by training. But "if I could have taken a year or two from my medical training to do a computer-science degree, I would have," he says. These days he is closer to the subject than any university student. In 2016 he began a collaboration with DeepMind, an AI firm owned by Google, to apply AI to ophthalmology.

In Britain the number of ophthalmologists is not keeping up with the falling cost of eye scans (about £20, or $25, from high-street opticians) and growing demand from an ageing population. In theory, computers can help. In 2018 Moorfields and DeepMind published a paper describing an AI that, given a retina scan, could make correct referral decisions 94% of the time, matching human experts. A more recent paper described a system that can predict the onset of age-related macular degeneration, a progressive disease that causes blindness, up to six months in advance.
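For the curious, the sketch below shows the rough shape of such a classifier in Python with PyTorch. It is an illustration only, not DeepMind's system, which works on three-dimensional OCT volumes through a more elaborate pipeline; the network, the class names and the four referral categories here are invented for the example.

```python
# Illustrative sketch only: a tiny convolutional network that maps a retinal
# scan to one of four referral decisions. Everything here (architecture,
# names, categories) is a simplifying assumption, not the published model.
import torch
import torch.nn as nn

REFERRAL_CLASSES = ["urgent", "semi-urgent", "routine", "observation only"]

class RetinaReferralNet(nn.Module):
    def __init__(self, num_classes: int = len(REFERRAL_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),          # collapse to one value per channel
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, scan: torch.Tensor) -> torch.Tensor:
        x = self.features(scan)               # (batch, 32, 1, 1)
        return self.classifier(x.flatten(1))  # logits over referral classes

model = RetinaReferralNet()
scan = torch.randn(1, 1, 224, 224)            # one fake greyscale scan
decision = REFERRAL_CLASSES[model(scan).argmax(dim=1).item()]
print(decision)
```

The point of the sketch is the output: rather than a free-text diagnosis, the system emits one of a fixed set of referral decisions, which is what allowed the Moorfields study to score it against human experts.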

But Dr Keane cautions that in practice, moving from a lab demonstration to a real system takes time: the technology is not yet being used on real patients. His work highlights three thorny problems that must be overcome if AI is to be rolled out more quickly, in medicine and elsewhere.

The first is about getting data into a coherent, usable format. "We often hear from medics saying they have a big dataset on one disease or another," says Dr Keane. "But when you ask basic questions about what format the data is in, we never hear from them again."

Then there are the challenges of privacy and regulation. Laws guarding medical records tend to be fierce, and regulators are still wrestling with the question of how exactly to subject AI systems to clinical trials.

Finally there is the question of explainability. Because AI systems learn from examples rather than following explicit rules, working out why they reach particular conclusions can be tricky. Researchers call this the "black box" problem. As AI spreads into areas such as medicine and law, solving it is becoming increasingly important.

One approach is to highlight which features in the model's input most strongly affect its output. Another is to boil models down into simplified flow-charts, or let users question them ("would moving this blob change the diagnosis?"). To further complicate matters, notes Dr Keane, techies building a system may prefer one kind of explainability for testing purposes, while medics using it might want something closer to clinical reasoning. Solving this problem, he says, will be important both to mollify regulators and to give doctors confidence in the machine's opinions.
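The first of those approaches can be surprisingly simple in code. The sketch below (again an illustration under assumed names, reusing the toy classifier above, not any production method) computes an occlusion map: it blanks out one patch of the scan at a time and records how far the model's confidence in its original decision falls, so the patches the model relies on stand out.

```python
# Illustrative occlusion saliency: grey out each patch of the scan in turn and
# record how much the model's confidence in its original decision drops.
# Patch size, and the choice of zero as the masking value, are assumptions.
import torch

@torch.no_grad()
def occlusion_map(model, scan: torch.Tensor, patch: int = 32) -> torch.Tensor:
    probs = model(scan).softmax(dim=1)   # class probabilities for intact scan
    cls = probs.argmax(dim=1)            # the model's original decision
    base = probs[0, cls].item()          # its confidence in that decision
    _, _, h, w = scan.shape
    heat = torch.zeros(h // patch, w // patch)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = scan.clone()
            masked[:, :, i:i + patch, j:j + patch] = 0.0   # blank one patch
            drop = base - model(masked).softmax(dim=1)[0, cls].item()
            heat[i // patch, j // patch] = drop
    return heat  # higher value = patch mattered more to the decision

heat = occlusion_map(model, scan)        # a 7x7 grid of confidence drops
```

Run on a single region rather than a grid, the same trick answers the counterfactual question in the parenthesis above: blank the blob and see whether the diagnosis changes.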

But even when it is widely deployed, AI will remain a backroom tool, not a drop-in replacement for human medics, he predicts: "I can't foresee a scenario in which a pop-up on your iPhone tells you you've got cancer. There is more to being a doctor than accurate diagnosis."

This article appeared in the Technology Quarterly section of the print edition under the headline "An AI for an eye"
