To Get Consumers to Trust AI, Show Them Its Benefits – Harvard Business Review

Posted: April 17, 2017 at 12:53 pm

Executive Summary

Artificial intelligence (AI) is increasingly emerging in applications like autonomous vehicles and medical assistance devices, but consumers don't necessarily trust these applications. Research shows that operational safety and data security are decisive factors in getting people to trust new AI technology. Even more important is the balance between control and autonomy in the technology. Communication is also key: it should be proactive and open in the early stages of introducing the public to the technology. When the benefits of an AI application are communicated effectively, consumers perceive less risk, which results in greater trust and, ultimately, greater adoption of the technology.

Artificial intelligence (AI) is emerging in applications like autonomous vehicles and medical assistance devices. But even when the technology is ready to use and has been shown to meet customer demands, there's still a great deal of skepticism among consumers. For example, a survey of more than 1,000 car buyers in Germany showed that only 5% would prefer a fully autonomous vehicle. Skepticism is similarly widespread toward AI-enabled medical diagnosis systems, such as IBM's Watson. The public's lack of trust in AI applications may cause us to collectively neglect the possible advantages we could gain from them.

In order to understand trust in the relationship between humans and automation, we have to explore trust in two dimensions: trust in the technology and trust in the innovating firm.


In human interactions, trust is the willingness to be vulnerable to the actions of another person. But trust is an evolving and fragile phenomenon that can be destroyed even faster than it can be created. Trust is essential to reducing perceived risk, which is a combination of uncertainty and the seriousness of the potential outcome involved. Perceived risk in the context of AI stems from giving up control to a machine. Trust in automation can only evolve from predictability, dependability, and faith.

Three factors will be crucial to gaining this trust: (1) performance, that is, the application performs as expected; (2) process, that is, we understand the underlying logic of the technology; and (3) purpose, that is, we have faith in the design's intentions. Additionally, trust in the company designing the AI, and the way the firm communicates with customers, will influence whether customers adopt the technology. Too many high-tech companies wrongly assume that the quality of the technology alone will persuade people to use it.

In order to understand how firms have systematically enhanced trust in applied AI, my colleagues Monika Hengstler and Selina Duelli and I conducted nine case studies in the transportation and medical device industries. By comparing BMW's semi-autonomous and fully autonomous cars, Daimler's Future Truck project, ZF Friedrichshafen's driving assistance system, as well as Deutsche Bahn's semi-autonomous and fully autonomous trains and VAG Nürnberg's fully automated underground train, we gained a deeper understanding of how those companies foster trust in their AI applications. We also analyzed four cases in the medical technology industry, including IBM's Watson as an AI-empowered diagnosis system, HP's data analytics system for automated fraud detection in the healthcare sector, AiCure's medication adherence app that reminds patients to take their medication, and the Care-O-bot 3 of Fraunhofer IPA, a research platform for upcoming commercial service robot solutions. Our semi-structured interviews, follow-ups, and archival data analysis were guided by a theoretical discussion of how trust in the technology and in the innovating firm and its communication is facilitated.

Based on this cross-case analysis, we found that operational safety and data security are decisive factors in getting people to trust technology. Since AI-empowered technology is based on the delegation of control, it will not be trusted if it is flawed. And since negative events are more visible than positive events, operational safety alone is not sufficient for building trust. Additionally, cognitive compatibility, trialability, and usability are needed:

Cognitive compatibility describes what people feel or think about an innovation as it pertains to their values. Users tend to trust automation if the algorithms are understandable and guide them toward achieving their goals. This understandability of algorithms and the motives in AI applications directly affect the perceived predictability of the system, which, in turn, is one of the foundations of trust.

Trialability refers to the fact that people who can see the concrete benefits of a new technology through a trial run perceive less risk and therefore show less resistance to the technology.

Usability is influenced by both the intuitiveness of the technology and its perceived ease of use. An intuitive interface can reduce initial resistance and make the technology more accessible, particularly for less tech-savvy people. Usability testing with the target user group is an important first step toward creating this ease of use.

But even more important is the balance between control and autonomy in the technology. For efficient collaboration between humans and machines, the appropriate level of automation must be carefully defined. This is even more important in intelligent applications that are designed to change human behaviors (such as medical devices that incentivize people to take their medications on time). The interaction should not make people feel like they're being monitored, but rather assisted. Appropriate incentives are important to keep people engaged with an application, ultimately motivating them to use it as intended. Our cases showed that technologies with high visibility (e.g., autonomous cars in the transportation industry, or AiCure and Care-O-bot in the healthcare industry) require more intensive efforts to foster trust across all three trust dimensions.

Our results also showed that stakeholder alignment, transparency about the development process, and gradual introduction of the technology are crucial strategies for fostering trust. Introducing innovations in a stepwise fashion can lead to more gradual social learning, which in turn builds trust. Accordingly, the established firms in our sample tended to pursue a more gradual introduction of their AI applications to allow for social learning, while younger companies such as AiCure tended to choose a more revolutionary introduction approach in order to position themselves as a technology leader. The latter approach has a high risk of rejection and the potential to cause a scandal if the underlying algorithms turn out to be flawed.

If you're trying to get consumers to trust a new AI-enabled application, communication should be proactive and open in the early stages of introducing the public to the technology, as it will influence the company's perceived credibility and trustworthiness, which in turn shapes attitude formation. In the cases we studied, effectively communicating the benefits of an AI application reduced users' perceived risk, which resulted in greater trust and a higher likelihood that they would adopt the new technology.

Read the original here:

To Get Consumers to Trust AI, Show Them Its Benefits - Harvard Business Review
