6 tactics to make artificial intelligence work on the frontlines – STAT

Posted: September 15, 2022 at 10:06 pm

Artificial intelligence is a transformative tool in the workplace, except when it isn't.

For top managers, state-of-the-art AI tools are a no-brainer: in theory, they increase revenues, decrease costs, and improve the quality of products and services. But in the wild, it's often just the opposite for the frontline employees who actually need to integrate these tools into their daily work. Not only can AI tools yield few benefits, they can also introduce additional work and decrease autonomy.

Our research on the introduction of 15 AI clinical decision support tools over the past five years at Duke Health has shown that the key to successfully integrating them is recognizing that increasing the value for frontline employees is as important as making sure the tools work in the first place. The tactics we identified are useful not only in biopharma, medicine, and health care, but across a range of other industries as well.

Here are six tactics for making artificial intelligence-based tools work on industry frontlines.

AI project leaders need to increase benefits for the frontline employees who will be the actual end users of a new tool, though this is often not the group that initially approaches them to build it.

Cardiologists in Duke's intensive care unit asked AI project team leaders to build a tool to identify heart attack patients who did not need ICU care. The cardiologists said the tool would allow frontline emergency physicians to more easily identify these patients and triage them to noncritical care, increasing the quality of care, lowering costs, and preventing unnecessary overcrowding in the ICU.

The team developed a highly accurate tool that helped ER doctors identify low-risk patients. But within weeks of launching, the tool was scrapped. Frontline emergency physicians complained that they didn't need a tool to tell them how to do their job. Incorporating the tool meant extra work, and they resented the intrusion from outsiders.

The artificial intelligence team had been so focused on the needs of the group that initially approached them (the cardiologists) that it neglected those who would actually use the tool (the emergency physicians).

The next time cardiologists approached the developers, the latter were savvier. This time, the cardiologists wanted an AI tool to help identify patients with low-risk pulmonary embolism (one or more blood clots in the lungs), so they could be sent home instead of hospitalized. The developers immediately reached out to emergency physicians, who would ultimately use the tool, to understand their pain points around the treatment of patients with pulmonary embolism. The developers learned that emergency physicians would use the tool only if they could be sure that patients would get the appropriate follow-up care. Cardiologists agreed to staff a special outpatient clinic for these patients.

This time, the emergency doctors accepted the tool, and it was successfully integrated into the emergency department workflow.

The key lesson here is that project leaders need to identify the frontline employees who will be the true end users of a new tool based on artificial intelligence; otherwise, those users will resist adopting it. When end users are included in the development process, they help make the tool more useful in daily work.

Successful AI project team leaders measure and reward frontline employees for accomplishing the outcomes the tool is designed to improve.

In the pulmonary embolism project described earlier, project leaders learned that emergency physicians might not use the tool because they were evaluated on how well they recognized and handled acute, common issues rather than uncommon ones like low-risk pulmonary embolism. So the leaders worked with hospital management to change the reward system so that emergency physicians are now also evaluated on how successfully they recognize and triage low-risk pulmonary embolism patients.

It may seem obvious that it is necessary to reward employees for accomplishing the outcomes a tool is designed to improve. But this is easier said than done, because AI project team leaders usually don't control compensation decisions for these employees. Project leaders need to gain top managers' support to help change incentives for end users.

Data used to train a tool based on artificial intelligence must be representative of the target population in which it will be used. Meeting that requirement takes a great deal of data work: identifying, labeling, and cleaning large volumes of training data during tool design. AI project team leaders need to reduce the amount of this work that falls on frontline employees.
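To make "representative" concrete, here is a minimal sketch of the kind of cohort check a project team might run before training. The column names, toy data, and the 5-percentage-point threshold are illustrative assumptions on our part, not Duke's actual pipeline.

```python
# Hypothetical sketch: check whether a training cohort resembles the
# target population before a tool is built on it. Column names, toy
# data, and the 5-point threshold are illustrative assumptions.
import pandas as pd

def compare_cohorts(train: pd.DataFrame, target: pd.DataFrame,
                    columns: list[str], max_gap: float = 0.05) -> list[str]:
    """Flag categories whose population share differs by more than max_gap."""
    flags = []
    for col in columns:
        train_share = train[col].value_counts(normalize=True)
        target_share = target[col].value_counts(normalize=True)
        # Align on the union of categories; a missing category counts as 0%.
        gaps = train_share.sub(target_share, fill_value=0).abs()
        for category, gap in gaps.items():
            if gap > max_gap:
                flags.append(f"{col}={category}: off by {gap:.1%}")
    return flags

# Toy usage: the training data over-represents one sex and one care site.
train = pd.DataFrame({"sex": ["F", "F", "M", "F"], "site": ["ED"] * 4})
target = pd.DataFrame({"sex": ["F", "M", "M", "M"],
                       "site": ["ED", "ED", "ICU", "ICU"]})
for warning in compare_cohorts(train, target, ["sex", "site"]):
    print("representativeness gap:", warning)
```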

For example, kidney specialists asked the Duke AI team for a tool to increase early detection of people at high risk of chronic kidney disease. It would help frontline primary care physicians both identify patients who needed referral to nephrologists and reduce the number of low-risk patients who were needlessly referred.

To build the tool, the developers initially wanted to engage primary care practitioners in the time-consuming work of spotting and resolving discrepancies between different data sources. But because it was the nephrologists, not the primary care practitioners, who would primarily benefit from the tool, PCPs were not enthusiastic about taking on additional work to build a tool they hadn't asked for. So the developers enlisted nephrologists rather than PCPs to do the work of data label generation, data curation, and data quality assurance.
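As a minimal sketch of what that delegated data-quality work can look like, the snippet below reconciles two hypothetical data feeds and routes only the disagreements to a specialist review queue. The field names and toy records are assumptions for illustration, not the project's actual schema.

```python
# Hypothetical sketch of delegated data-quality work: reconcile a lab
# feed with a claims feed and send only the disagreements to a
# nephrologist review queue, sparing PCPs the reconciliation work.
# Field names ("mrn", "ckd_stage") are illustrative assumptions.
import pandas as pd

labs = pd.DataFrame({"mrn": [1, 2, 3], "ckd_stage": [2, 3, 4]})
claims = pd.DataFrame({"mrn": [1, 2, 3], "ckd_stage": [2, 4, 4]})

merged = labs.merge(claims, on="mrn", suffixes=("_labs", "_claims"))
disagreements = merged[merged["ckd_stage_labs"] != merged["ckd_stage_claims"]]

# Only records where the two sources disagree need specialist review.
print(disagreements.to_string(index=False))
```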

Reducing data work for frontline employees makes perfect sense, so why do some AI project leaders fail to do it? Because those employees often know the data's idiosyncrasies and the most meaningful outcome measures better than anyone else. The solution is to involve them, but to use their labor judiciously.

Deploying AI tools requires frontline employees to engage in integration work to incorporate the tool into their daily workflows. Developers can increase adoption by reducing this integration work.

Developers working on the kidney disease tool avoided requesting information they could retrieve automatically. They also made the tool easier to use by color-coding high-risk patients in red and medium-risk patients in yellow.
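The display logic itself can be very simple. Here is a minimal sketch of such risk-tier color coding; the score thresholds and toy values are illustrative assumptions, and a real tool would calibrate the cutoffs with the clinicians who use it.

```python
# Minimal sketch of the color-coded display logic described above.
# The 0.2 and 0.5 thresholds are illustrative assumptions, not the
# tool's actual cutoffs.
def risk_color(score: float) -> str:
    """Map a model's risk score to the tier a clinician sees."""
    if score >= 0.5:
        return "red"      # high risk: surfaced first
    if score >= 0.2:
        return "yellow"   # medium risk: review soon
    return "none"         # low risk: no extra visual weight

for patient, score in [("A", 0.72), ("B", 0.31), ("C", 0.05)]:
    print(patient, risk_color(score))
```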

When it comes to integration work, AI developers often want to involve frontline employees for two reasons: they know best how a new tool will fit into existing workflows, and those involved in development are more likely to persuade their peers to use the tool. Rather than avoid enlisting frontline employees altogether, developers need to assess which aspects of AI tool development will benefit most from their labor.

Most jobs include valued tasks as well as necessary scut work. One important tactic for AI developers is not infringing on the work that frontline employees value.

What emergency physicians value is diagnosing problems and efficiently triaging patients. So when Duke's artificial intelligence team began developing a tool to better detect and manage the potentially deadly bloodstream infection known as sepsis, they configured it to avoid infringing on emergency physicians' valued tasks. They built it instead to help with what these doctors valued less: blood test analysis, medication administration, and physical exam assessments.

AI project team leaders often fail to protect the core work of frontline employees because intervening in these important tasks often promises greater gains. Smart AI leaders have discovered, however, that employees are much more likely to use technology that helps with their scut work than technology that infringes on the work they love to do.

Introducing a new AI decision support tool can threaten to curtail employee autonomy. For example, because the AI sepsis tool flagged patients at high risk of the condition, it threatened clinicians' autonomy in diagnosing patients. So the project team invited key frontline workers to choose the best ways to test the tool's effectiveness.

AI project team leaders often fail to include frontline employees in the evaluation process because doing so can make evaluation harder in the short term: when frontline employees are asked to select what will be tested, they often select the most challenging options. We have found, however, that developers cannot bypass this phase, because employees will balk at using tools they don't have confidence in.
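One way such a test can work, sketched here under assumed data and field names rather than Duke's actual protocol, is a retrospective "silent" run that compares the tool's flags against chart-confirmed outcomes and reports the operating characteristics clinicians ask about, such as sensitivity and positive predictive value.

```python
# Hypothetical sketch of a pre-launch "silent" evaluation: the tool's
# flags are logged but not shown to clinicians, then compared against
# chart-confirmed outcomes chosen by the frontline reviewers. The toy
# data and metric choices are illustrative assumptions.
def evaluate(flags: list[bool], outcomes: list[bool]) -> dict[str, float]:
    tp = sum(f and o for f, o in zip(flags, outcomes))        # true positives
    fp = sum(f and not o for f, o in zip(flags, outcomes))    # false alarms
    fn = sum(not f and o for f, o in zip(flags, outcomes))    # missed cases
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # share of true cases flagged
        "ppv": tp / (tp + fp) if tp + fp else 0.0,          # share of flags that were real
    }

flags = [True, True, False, True, False]
outcomes = [True, False, False, True, True]
print(evaluate(flags, outcomes))  # sensitivity and PPV are each 2/3 here
```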

Behind the bold promise of AI lies a stark reality: AI solutions often make employees lives harder. Managers need to increase value for those working on the front lines to allow AI to function in the real world.

Katherine C. Kellogg is a professor of management and innovation and head of the Work and Organization Studies department at the MIT Sloan School of Management. Mark P. Sendak is the population health and data science lead at the Duke Institute for Health Innovation. Suresh Balu is the associate dean for innovation and partnership for the Duke University School of Medicine and director of the Duke Institute for Health Innovation.
