DARPA Is Working to Make AI More Trustworthy

Posted: June 10, 2017 at 7:09 pm

In Brief: To probe the AI mind, DARPA is funding research at Oregon State University that aims to understand the reasoning behind decisions made by AI systems. DARPA hopes this will make AI more trustworthy.

Cracking the AI Black Box

Artificial intelligence (AI) has grown by leaps and bounds in recent years. There are now AI systems capable of driving cars and making medical diagnoses, as well as making many of the other choices people face on a day-to-day basis. The difference is that when a human makes such a decision, we can (to a certain extent) understand the reasoning behind it.

When it comes to AI, however, there's a black box behind every decision: even AI developers themselves don't quite understand or anticipate the choices an AI makes. We do know that neural networks learn to make these choices by being exposed to huge data sets, from which they train themselves to apply what they have learned. And it's rather difficult to trust what one doesn't understand.
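To make that opacity concrete, here is a minimal Python sketch (using only NumPy; the data, network, and gradient-saliency heuristic are all invented for illustration and have nothing to do with the DARPA-OSU project). A tiny network learns a hidden rule, and the resulting weights are just numbers with no readable rationale attached; a crude gradient-based "explanation" is about the best one can extract without further machinery.

```python
# Illustrative sketch only: a tiny "black box" network and a crude explanation.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends only on features 0 and 1; feature 2 is noise.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Two-layer network: the learned weights are opaque numbers -- the black box.
W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def forward(x):
    h = np.tanh(x @ W1)
    p = 1 / (1 + np.exp(-(h @ W2)))  # sigmoid output = P(label is 1)
    return h, p

# Plain full-batch gradient descent on binary cross-entropy.
for _ in range(2000):
    h, p = forward(X)
    grad_out = (p - y[:, None]) / len(X)      # dLoss/dlogit
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # backprop through tanh
    W2 -= 0.5 * (h.T @ grad_out)
    W1 -= 0.5 * (X.T @ grad_h)

# Crude "explanation": the gradient of the output w.r.t. each input feature
# (a saliency score) hints at which features the decision is sensitive to.
x_new = np.array([1.0, 0.5, -2.0])
h, p = forward(x_new[None, :])
grad_h = (p * (1 - p)) @ W2.T * (1 - h ** 2)
saliency = grad_h @ W1.T
print("P(label is 1):", p.item())
print("feature saliency:", saliency.ravel())  # noise feature stays near 0
```

Even here, the saliency scores are numbers, not sentences; turning such signals into natural-language explanations is the harder problem the research targets.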

The U.S. Defense Advanced Research Projects Agency (DARPA) wants to break open this black box, and its first step is a $6.5 million research grant funding eight computer science professors from Oregon State University (OSU). "Ultimately, we want these explanations to be very natural, translating these deep network decisions into sentences and visualizations," OSU's Alan Fern, principal investigator for the grant, said in a press release.

The DARPA-OSU program, set to run for four years, will involve developing a system that allows AIs to communicate with machine learning experts. The researchers plan to start by plugging AI-powered players into real-time strategy games like StarCraft, where the AI players would be trained to explain to human players the reasoning behind their in-game choices; a simplified sketch of the idea appears below. This isn't the first project to put AIs into video game environments: Google's DeepMind has also chosen StarCraft as a training environment for AI, and there's also that controversial Doom-playing AI bot.
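As a hypothetical illustration of pairing each action with a rationale (the game state, rules, and wording below are invented; the actual OSU system would generate explanations from learned deep-network policies, not hand-written rules):

```python
# Invented example: an agent that returns a human-readable rationale
# alongside each in-game decision, instead of the action alone.
from dataclasses import dataclass

@dataclass
class GameState:
    minerals: int
    army_size: int
    enemy_army_size: int

def choose_action(state: GameState) -> tuple[str, str]:
    """Return (action, explanation) rather than a bare action."""
    if state.enemy_army_size > state.army_size:
        return ("build_units",
                f"Enemy army ({state.enemy_army_size}) outnumbers ours "
                f"({state.army_size}), so I am reinforcing before engaging.")
    if state.minerals > 400:
        return ("expand",
                f"{state.minerals} minerals are banked and we hold a military "
                f"edge, so investing in a new base is low-risk.")
    return ("attack",
            "Our army is larger and resources are committed, so attacking "
            "now presses the advantage.")

action, why = choose_action(GameState(minerals=500, army_size=20,
                                      enemy_army_size=30))
print(action, "->", why)
```

The research challenge is producing explanations like these from a deep network's internal state, where no such hand-written rules exist.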

DARPA would then apply the results of this research to its existing work with robotics and unmanned vehicles. Obviously, the potential applications of AI in law enforcement and the military require these systems to be ethical.

"Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust, and having an explanation capability is one important way of building trust," Fern said. Thankfully, this DARPA-OSU project isn't the only one working on humanizing AI to make it more trustworthy.
