The U.S. Food and Drug Administration on Thursday convened a public meeting of its Patient Engagement Advisory Committee to discuss issues regarding artificial intelligence and machine learning in medical devices.
"Devices using AI and ML technology will transform healthcare delivery by increasing efficiency in key processes in the treatment of patients," said Dr. Paul Conway, PEAC chair and chair of policy and global affairs of the American Association of Kidney Patients.
As Conway and others noted during the panel, AI and ML systems can carry algorithmic biases and lack transparency, which may in turn undermine patient trust in devices.
Medical device innovation has ramped up in response to the COVID-19 crisis, with Center for Devices and Radiological Health Director Dr. Jeff Shuren noting that 562 medical devices have already been granted emergency use authorization by the FDA.
It's imperative, said Shuren, that patients' needs be considered as part of the creation process.
"We continue to encourage all members of the healthcare ecosystem to strive to understand patients' perspective and proactively incorporate them into medical device development, modification and evaluation," said Shuren. "Patients are truly the inspiration for all the work we do."
"Despite the global challenges with the COVID-19 public health emergency ... the patient's voice won't be stopped," Shuren added. "And if anything, there is even more reason for it to be heard."
However, said Pat Baird, regulatory head of global software standards at Philips, facilitating patient trust also means acknowledging the importance of robust and accurate data sets.
"To help support ourpatients, we need to become more familiar with them, their medical conditions, their environment, and their needs and wantsto be able to better understand the potentially confounding factors that drive some of the trends in the collected data," said Baird.
"An algorithm trained on one subset of the population might not be relevant for a different subset," Baird explained.
For instance, if a hospital needed a device to serve the population of seniors at a Florida retirement community, an algorithm trained to recognize the healthcare needs of teens in Maine would not be effective. Not every population has the same needs.
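To make the point concrete, here is a minimal sketch, using invented data, of the generalization gap Baird describes: a model fit on one demographic cohort loses accuracy on another cohort in which the same features relate to outcomes differently. The cohorts, features and risk function are illustrative assumptions, not anything presented at the meeting.

```python
# Minimal sketch (invented data) of a model trained on one subpopulation
# being evaluated on a different one with a different feature-outcome link.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, mean_age, risk_slope):
    """Simulate a cohort whose outcome depends on age in a cohort-specific way."""
    age = rng.normal(mean_age, 5, n)
    vitals = rng.normal(0, 1, n)
    logits = risk_slope * (age - mean_age) / 10 + vitals
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)
    return np.column_stack([age, vitals]), y

# Training population: teens; deployment population: seniors, where the
# relationship between the same features and the outcome differs.
X_train, y_train = make_cohort(2000, mean_age=16, risk_slope=0.5)
X_deploy, y_deploy = make_cohort(2000, mean_age=75, risk_slope=-1.5)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on the training population:", model.score(X_train, y_train))
print("accuracy on the unseen population:  ", model.score(X_deploy, y_deploy))
```

Run as written, the accuracy on the unseen cohort drops sharply, which is the practical meaning of "an algorithm trained on one subset of the population might not be relevant for a different subset."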
"This bias in the data is not intentional, but can be hard to identify," he continued. He encouraged the development of a taxonomy of bias types that would be made publicly available.
Ultimately, he said, people won't use what they don't trust. "We need to use our collective intelligence to help produce better artificial intelligence," he said.
Captain Terri Cornelison, chief medical officer and director for the health of women at CDRH, noted that demographic identifiers can be medically significant due to genetics and social determinants of health, among other factors.
"Science is showing us that these are not just categorical identifiers but actually clinically relevant," Cornelison said.
She pointed out that a clinical study that does not identify patients' sex may mask different results for people with different chromosomes.
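A small worked example, with invented numbers, shows the masking effect she describes: pooled response rates for a hypothetical treatment look identical, while sex-stratified rates move in opposite directions.

```python
# Invented numbers illustrating how pooling over sex can mask a
# sex-specific treatment effect: (responders, cohort size) per group.
treated = {"female": (30, 100), "male": (70, 100)}
control = {"female": (45, 100), "male": (55, 100)}

def pooled_rate(groups):
    """Response rate with sex ignored."""
    responders = sum(r for r, _ in groups.values())
    n = sum(n for _, n in groups.values())
    return responders / n

print(f"pooled:  treated {pooled_rate(treated):.0%} vs control {pooled_rate(control):.0%}")
for sex in ("female", "male"):
    r_t, n_t = treated[sex]
    r_c, n_c = control[sex]
    print(f"{sex:7s} treated {r_t / n_t:.0%} vs control {r_c / n_c:.0%}")
```

Pooled, treatment and control both respond at 50%; stratified, the treatment looks harmful for one sex and beneficial for the other, a difference the unstratified study never surfaces.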
"In many instances, AI and ML devices may be learning a worldview that is narrow in focus, particularly in the available training data, if the available training data do not represent a diverse set of patients," she said.
"More simply, AI and ML algorithms may not represent you if the data do not include you," she said.
"Advances in artificial intelligence are transforming our health systems and daily lives," Cornelison continued. "Yet despite these significant achievements, most ignore the sex, gender, age, race [and] ethnicity dimensions and their contributions to health and disease differences among individuals."
The committee also examined how informed consent might play a role in algorithmic training.
"If I give my consent to be treated by an AI/ML device, I have the right to know whether there were patients like me ... in the data set," said Bennet Dunlap, a health communications consultant. "I think the FDA should not be accepting or approving a medical device that does not have patient engagement" of the kind outlined in committee meetings, he continued.
"You need to know what your data is going to be used for," he reiterated. "I have white privilege. I can just assume old white guys are in [the data sets]. That's where everybody starts. But that should not be the case."
Dr. Monica Parker, assistant professor in neurology and education core member of the Goizueta Alzheimer's Disease Research Center at Emory University, pointed out that diversifying patient data requires turning to trusted entities within communities.
"If people are developing these devices, in the interest of being more broadly diverse, is there some question about where these things were tested?" She raised the issue of testing taking place in academic medical centers or technology centers on the East or West Coast, versus "real-world data collection from hospitals that may be using some variation of the device for disease process.
"Clinicians who are serving the population for which the device is needed" provide accountability and give the device developer a better sense of whothey're treating, Parker said. She also reminded fellow committee members that members of different demographic groups are not uniform.
Philip Rutherford, director of operations at Faces & Voices of Recovery, pointed out that it's not enough just to prioritize diversity in data sets. The people in charge of training the algorithm must also not be homogenous.
"If we want diversity in our data, we have to seek diversity in the people that are collecting the data," said Rutherford.
The committee called on the FDA to take a strong role in addressing algorithmic bias in artificial intelligence and machine learning.
"At the end of the day, diversity validation and unconscious bias all these things can be addressed if there's strong leadership from the start," said Conway.
Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.