Why The Military And Corporate America Want To Make AI Explain Itself

Posted: June 23, 2017 at 6:15 am

Modern artificial intelligence is smart enough to beat humans at chess, understand speech, and even drive a car.

But one area where machine-learning algorithms still struggle is explaining to humans how and why they're making particular decisions. That can be fine if computers are just playing games, but for more serious applications, people are a lot less willing to trust a machine whose thought processes they can't understand.

If AI is being used to make decisions about who to hire or whether to extend a bank loan, people want to make sure the algorithm hasn't absorbed race or gender biases from the society that trained it. If a computer is going to drive a car, engineers will want to make sure it doesn't have any blind spots that will send it careening off the road in unexpected situations. And if a machine is going to help make medical diagnoses, doctors and patients will want to know what symptoms and readings it's relying on.

"If you go to a doctor and the doctor says, 'Hey, you have six months to live,' and offers absolutely no explanation as to why the doctor is saying that, that would be a pretty poor doctor," says Sameer Singh, an assistant professor of computer science at the University of California at Irvine.

Singh is a coauthor of a frequently cited paper published last year that proposes a system for making machine-learning decisions more comprehensible to humans. The system, known as LIME, highlights parts of the input data that factor heavily in the computer's decisions. In one example from the paper, an algorithm trained to distinguish forum posts about Christianity from those about atheism appears accurate at first blush, but LIME reveals that it's relying heavily on forum-specific features, like the names of prolific posters.
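
To see what that looks like in practice, here is a minimal sketch using the open-source lime Python package and scikit-learn, in the spirit of the paper's newsgroup experiment; the classifier choice and the exact output are illustrative rather than the authors' precise setup.

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from lime.lime_text import LimeTextExplainer

    # Train an ordinary text classifier on the atheism-vs-Christianity newsgroups.
    categories = ["alt.atheism", "soc.religion.christian"]
    train = fetch_20newsgroups(subset="train", categories=categories)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train.data, train.target)

    # Ask LIME which words pushed the prediction for a single post.
    explainer = LimeTextExplainer(class_names=["atheism", "christianity"])
    explanation = explainer.explain_instance(
        train.data[0], model.predict_proba, num_features=6
    )
    print(explanation.as_list())  # (word, weight) pairs; header fields and poster
                                  # names often score highly, as the paper observed

The weights show how much each word nudged the classifier toward one label, which is what makes spurious, forum-specific cues easy to spot.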

Developing "explainable AI," as such systems are frequently called, is more than an academic exercise. It's of growing interest to commercial users of AI and to the military. Explanations of how algorithms are thinking make it easier for leaders to adopt artificial intelligence systems within their organizations, and easier to challenge them when they're wrong.

"If they disagree with that decision, they will be way more confident in going back to the people who wrote that and say, 'No, this doesn't make sense because of this,'" says Mark Hammond, cofounder and CEO of AI startup Bonsai.

Last month, the Defense Advanced Research Projects Agency signed formal agreements with 10 research teams in a four-year, multimillion-dollar program designed to develop new explainable AI systems and interfaces for delivering the explanations to humans. Some of the teams will work on systems for operating simulated autonomous devices, like self-driving cars, while others will work on algorithms for analyzing mounds of data, like intelligence reports.

"Each year, we'll have a major evaluation where we'll bring in groups of users who will sit down with these systems," says David Gunning, program manager in DARPA's Information Innovation Office. Gunning says he imagines that by the end of the program, some of the prototype projects will be ready for further development for military or other use.

Deep learning, loosely inspired by the networks of neurons in the human brain, uses sample data to develop multilayered sets of huge matrices of numbers. Algorithms then harness those matrices to analyze and categorize data, whether they're looking for familiar faces in a crowd or trying to spot the best move on a chessboard. Typically, they process information starting at the lowest level, whether that's the individual pixels of an image or individual letters of text. The matrices are used to decide how to weight each facet of that data through a series of complex mathematical formulas. While the algorithms often prove quite accurate, the large arrays of seemingly arbitrary numbers are effectively beyond human comprehension.
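
For a concrete sense of what those matrices amount to, the toy sketch below runs a two-layer network forward in plain NumPy; the layer sizes and random weights are purely illustrative stand-ins for values a real system would learn from training data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two layers of learned numbers; a real network has many more layers, and the
    # values come from training rather than a random generator.
    W1 = rng.normal(size=(784, 128))  # e.g. a 28x28-pixel image flattened to 784 inputs
    W2 = rng.normal(size=(128, 10))   # 10 output categories

    def forward(pixels):
        hidden = np.maximum(0, pixels @ W1)  # weight every pixel, keep positive signals
        return hidden @ W2                   # weight the hidden features into scores

    image = rng.random(784)
    print(forward(image).argmax())  # the "decision": the highest-scoring category

Nothing in W1 or W2 corresponds to a concept a person would recognize, which is precisely the opacity researchers are trying to work around.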

"The whole process is not transparent," says Xia Ben Hu, an assistant professor of computer science and engineering at Texas A&M and leader of one of the teams in the DARPA program. His group aims to produce what it calls "shallow models": mathematical constructs that behave, at least in certain cases, similarly to deep-learning algorithms while being simple enough for humans to understand.
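
The article doesn't spell out the group's method, but the general surrogate idea can be sketched: train a complex model, then fit a small, readable model to imitate its predictions rather than the ground-truth labels. The scikit-learn example below, on synthetic data, is a generic illustration of that idea, not Hu's actual system.

    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    # The "black box": a multilayer neural network.
    black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                              random_state=0).fit(X, y)

    # A shallow surrogate trained to mimic the black box's outputs, not the true labels.
    surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))
    print(export_text(surrogate))  # a handful of if/else rules a person can read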

Another team, from the Stanford University spin-off research group SRI International, plans to use what are called generative adversarial networks. Those are pairs of AI systems in which one is trained to produce realistic data in a particular category and the other is trained to distinguish the generated data from authentic samples. The purpose, in this case, is to generate explanations similar to those that might be given by humans.
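
As a rough illustration of the adversarial setup, rather than of SRI's system, the PyTorch sketch below pits a tiny generator against a tiny discriminator on one-dimensional numbers; SRI's proposal points the same mechanism at producing human-style explanations instead.

    import torch
    import torch.nn as nn

    # Generator: noise in, fake samples out. Discriminator: sample in, "is it real?" out.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0  # "authentic" data centered near 3.0
        fake = G(torch.randn(64, 8))

        # The discriminator learns to tell authentic samples from generated ones.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # The generator learns to fool the discriminator.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(G(torch.randn(5, 8)).detach().squeeze())  # generated samples should cluster near 3.0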

The team plans to test its approach on a data set called MovieQA, which consists of 15,000 multiple-choice questions about movies, along with data like their scripts and subtitled video clips.

"You have the movie, you have scripts, you have subtitles. You have all this rich data that is time-synched in situations," says SRI senior computer scientist Mohamed Amer. The question could be, "Who was the lead actor in The Matrix?"

Ideally, the system would not only deliver the correct answer but also let users highlight sections of the questions and answers to see the parts of the script and film it used to arrive at that answer.

"You hover over a verb, for example, it will show you a pose of the person, for example, doing the action," Amer says. "The idea is to kind of bring it down to an interpretable feature the person can actually visualize, where the person is not a machine-learning developer but is just a regular user of the system."

Steven Melendez is an independent journalist living in New Orleans.
