The Trade-Off Every AI Company Will Face – Harvard Business Review

Posted: March 29, 2017 at 11:23 am

It doesn't take a tremendous amount of training to begin a job as a cashier at McDonald's. Even on their first day, most new cashiers are good enough. And they improve as they serve more customers. Although a new cashier may be slower and make more mistakes than their experienced peers, society generally accepts that they will learn from experience.

We don't often think of it, but the same is true of commercial airline pilots. We take comfort that airline transport pilot certification is regulated by the U.S. Department of Transportation's Federal Aviation Administration and requires minimum experience of 1,500 hours of flight time, 500 hours of cross-country flight time, 100 hours of night flight time, and 75 hours of instrument operations time. But we also know that pilots continue to improve from on-the-job experience.

On January 15, 2009, when US Airways Flight 1549 was struck by a flock of Canada geese, shutting down all engine power, Captain Chesley "Sully" Sullenberger miraculously landed his plane in the Hudson River, saving the lives of all 155 passengers. Most reporters attributed his performance to experience. He had recorded 19,663 total flight hours, including 4,765 flying an A320. Sully himself reflected: "One way of looking at this might be that for 42 years, I've been making small, regular deposits in this bank of experience, education, and training. And on January 15, the balance was sufficient so that I could make a very large withdrawal." Sully, and all his passengers, benefited from the thousands of people he'd flown before.


The difference between cashiers and pilots in what constitutes "good enough" is based on tolerance for error. Obviously, our tolerance is much lower for pilots. This is reflected in the amount of in-house training we require them to accumulate prior to serving their first customers, even though they continue to learn from on-the-job experience. We have different definitions of "good enough" when it comes to how much training humans require in different jobs.

The same is true of machines that learn.

Artificial intelligence (AI) applications are based on generating predictions. Unlike traditionally programmed computer algorithms, which are designed to take data and follow a specified path to produce an outcome, machine learning, the most common approach to AI these days, involves algorithms that evolve through various learning processes. A machine is given data, including outcomes; it finds associations; and then, based on those associations, it takes new data it has never seen before and predicts an outcome.
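
For readers who want to see that train-then-predict loop concretely, here is a minimal sketch in Python. The dataset, the features, and the choice of scikit-learn's LogisticRegression are illustrative assumptions, not anything specified in the article:

# A machine is given data, including outcomes; it finds associations;
# then it predicts an outcome for data it has never seen before.
# The dataset and model choice below are assumed purely for illustration.
from sklearn.linear_model import LogisticRegression

X_train = [[5.0, 1], [2.0, 0], [7.5, 1], [1.0, 0], [6.0, 1], [0.5, 0]]  # past observations
y_train = [1, 0, 1, 0, 1, 0]                                            # known outcomes

model = LogisticRegression()
model.fit(X_train, y_train)        # "in-house training" on labeled examples

X_new = [[4.0, 1]]                 # data the machine has never seen before
print(model.predict(X_new))        # predicted outcome, e.g. [1]
print(model.predict_proba(X_new))  # the prediction is probabilistic, not certain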

This means that intelligent machines need to be trained, just as pilots and cashiers do. Companies design systems to train new employees until they are "good enough" and then deploy them into service, knowing that they will improve as they learn from experience doing their job. While this seems obvious, determining what constitutes "good enough" is an important decision. In the case of machine intelligence, it can be a major strategic decision regarding timing: when to shift from in-house training to on-the-job learning.
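
One way to picture that timing decision is as a simple threshold check: deploy once measured error falls within the tolerance for the use case. The sketch below is a hypothetical illustration with made-up numbers, not a description of how any particular company decides:

# Hypothetical sketch: shift from in-house training to on-the-job learning
# once the model's measured error is within the tolerance for this product.
def ready_to_deploy(validation_error: float, error_tolerance: float) -> bool:
    return validation_error <= error_tolerance

# A low-stakes product tolerates far more error than a safety-critical one.
# (Both tolerance figures here are assumptions for illustration.)
print(ready_to_deploy(validation_error=0.70, error_tolerance=0.75))      # email suggestions: True
print(ready_to_deploy(validation_error=0.001, error_tolerance=0.000001)) # driving: False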

There is no ready-made answer as to what constitutes "good enough" for machine intelligence. Instead, there are trade-offs. Success with machine intelligence will require taking these trade-offs seriously and approaching them strategically.

The first question firms must ask is what tolerance they and their customers have for error. We have a high tolerance for error with some intelligent machines and a low tolerance for others. For example, Google's Inbox application reads your email, uses AI to predict how you will want to respond, and generates three short responses for the user to choose from. Many users report enjoying the application even when it has a 70% failure rate (i.e., the AI-generated response is useful only 30% of the time). The reason for this high tolerance for error is that the benefit of reduced composing and typing outweighs the cost of wasted screen real estate when the predicted short response is wrong.
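
To see why a 70% failure rate can still be worth it, consider a back-of-the-envelope expected-value calculation. The seconds-saved and seconds-wasted figures below are assumptions chosen purely for illustration:

# Assumed figures: how much time a good suggestion saves vs. how much a bad one wastes.
p_useful = 0.30                  # suggested reply is usable 30% of the time
seconds_saved_when_useful = 30   # typing avoided when a suggestion fits
seconds_wasted_when_not = 1      # glancing past three unhelpful suggestions

expected_benefit = p_useful * seconds_saved_when_useful   # 9.0 seconds per email
expected_cost = (1 - p_useful) * seconds_wasted_when_not  # 0.7 seconds per email
print(expected_benefit > expected_cost)                   # True: the feature pays off on average

With low-stakes errors like these, even a modest chance of a large time saving dominates the cost of being wrong; the same calculation looks very different when the cost of an error includes lives.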

In contrast, we have low tolerance for error in the realm of autonomous driving. The first generation of autonomous vehicles, largely pioneered by Google, was trained using specialist human drivers who took a limited set of vehicles and drove them hundreds of thousands of kilometers. It was like a parent taking a teenager on supervised driving experiences before letting them drive on their own.

The human specialist drivers provide a safe training environment, but that environment is also extremely limited. The machine learns about only a small number of situations. It may take many millions of miles in varying environments and situations before a machine has learned how to deal with the rare incidents that are more likely to lead to accidents. For autonomous vehicles, real roads are nasty and unforgiving precisely because nasty or unforgiving human-caused situations can occur on them.

The second question to ask, then, is how important it is to capture user data in the wild. Understanding that training might otherwise take a prohibitively long time, Tesla rolled out autonomous vehicle capabilities to all its recent models. These capabilities included a set of sensors that collect environmental data as well as driving data that is uploaded to Tesla's machine learning servers. In a very short period of time, Tesla can obtain training data just by observing how the drivers of its cars drive. The more Tesla vehicles there are on the roads, the more Tesla's machines can learn.

However, in addition to passively collecting data as humans drive their Teslas, the company needs autonomous driving data to understand how its autonomous systems are operating. For that, it needs to have cars drive autonomously so that it can assess performance, but also assess when a human driver, required to be present and paying attention, chooses to intervene. Tesla's ultimate goal is not to produce a copilot, or a teenager who drives under supervision, but a fully autonomous vehicle. That requires getting to the point where real people feel comfortable in a self-driving car.

Herein lies a tricky trade-off. In order to get better, Tesla needs its machines to learn in real situations. But putting its current cars in real situations means giving customers a relatively young and inexperienced driver, although one perhaps as good as or better than many young human drivers. Still, this is far riskier than beta testing, for example, whether Siri or Alexa understood what you said, or whether Google Inbox correctly predicts your response to an email. In the case of Siri, Alexa, or Google Inbox, a mistake means a lower-quality user experience. In the case of autonomous vehicles, it means putting lives at risk.

As Backchannel documented in a recent article, that experience can be scary. Cars can exit freeways without notice, or put on the brakes when mistaking an underpass for an obstruction. Nervous drivers may opt not to use the autonomous features, and, in the process, may hinder Tesla's ability to learn. Furthermore, even if the company can persuade some people to become beta testers, are those the people it wants? After all, a beta tester for autonomous driving may be someone with a taste for more risk than the average driver. In that case, who is the company training its machines to be like?

Machines learn faster with more data, and more data is generated when machines are deployed in the wild. However, bad things can happen in the wild and harm the company brand. Putting products in the wild earlier accelerates learning but risks harming the brand (and perhaps the customer!); putting products in the wild later slows learning but allows for more time to improve the product in-house and protect the brand (and, again, perhaps the customer).

For some products, like Google Inbox, the answer to the trade-off seems clear because the cost of poor performance is low and the benefit of learning from customer usage is high. It makes sense to deploy this type of product in the wild early. For other products, like cars, the answer is less clear. As more companies seek to take advantage of machine learning, this is a trade-off more and more of them will have to make.
