Explainable AI: The push to make sure machines don't learn to be racist

Posted: July 5, 2017 at 9:14 am

Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their thinking.

"Computers are going to become increasingly important parts of our lives, if they aren't already, and the automation is just going to improve over time, so it's increasingly important to know why these complicated systems are making the decisions that they are," Sameer Singh, assistant professor of computer science at the University of California, Irvine, told CTV's Your Morning on Tuesday.

Singh explained that, in almost every application of machine learning and AI, there are cases where the computers do something completely unexpected.

"Sometimes it's a good thing; it's doing something much smarter than we realize," he said. "But sometimes it's picking up on things that it shouldn't."

Such was the case with Tay, the Microsoft AI chatbot that became racist in less than a day. Another high-profile incident occurred in 2015, when Google's photo app mistakenly labelled a black couple as gorillas.

Singh says incidents like these can happen because the data AI learns from comes from humans: either decisions humans made in the past or basic socio-economic structures that appear in the data.

"When machine learning models use that data, they tend to inherit those biases," said Singh.

"In fact, it can get much worse: if the AI agents are part of a loop where they're making decisions that feed into the future data, the biases get reinforced," he added.
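To make that loop concrete, here is a minimal, made-up simulation (not from the article): a model retrained on its own approval decisions, where an assumed initial gap between two hypothetical groups widens with each generation.

    import random

    random.seed(0)

    # Assumed starting point: historical data approves group_a more often.
    approval_rate = {"group_a": 0.7, "group_b": 0.5}

    for generation in range(5):
        # The "model" simply reproduces the approval rate in its training data.
        outcomes = {}
        for group, rate in approval_rate.items():
            outcomes[group] = [random.random() < rate for _ in range(1000)]
        # Retraining on the model's own decisions drifts toward past approvals
        # (the 1.05 factor is an assumed reinforcement effect), so the gap grows.
        approval_rate = {
            group: min(1.0, 1.05 * sum(o) / len(o))
            for group, o in outcomes.items()
        }
        gap = approval_rate["group_a"] - approval_rate["group_b"]
        print(f"generation {generation}: approval gap = {gap:.3f}")

The point is only that nothing in the loop corrects the initial imbalance; each round of retraining treats the previous round's biased output as ground truth.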

Researchers hope that, by seeing the thought process of the computers, they can make sure AI doesn't pick up any gender or racial biases that humans have.

However, Google's research director, Peter Norvig, has cast doubt on the concept of explainable AI.

"You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human, you're not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation, and that may not be the true explanation," he said at an event in June in Sydney, Australia.

"So we might end up being in the same place with machine learning, where we train one system to get an answer and then we train another system to say, 'Given the input of this first system, now it's your job to generate an explanation.'"

Norvig suggests looking for patterns in the decisions themselves, rather than the inner workings behind them.
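One way to read that suggestion, sketched below with hypothetical decision records rather than anything Norvig published, is a black-box audit: compare outcome rates across groups without opening the model at all.

    # Hypothetical decision log from a deployed model: (group, approved) pairs.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    # Audit the outputs, not the internals: tally approvals per group.
    totals = {}
    for group, approved in decisions:
        seen, yes = totals.get(group, (0, 0))
        totals[group] = (seen + 1, yes + int(approved))

    for group, (seen, yes) in totals.items():
        print(f"{group}: {yes}/{seen} approved ({yes / seen:.0%})")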

But Singh says understanding the decision process is critical for future use, particularly in cases where AI is making consequential decisions, such as approving loan applications.

"It's important to know what details they're using. Not just whether they're using your race column or your gender column, but whether they're using proxy signals like your location, which we know could be an indicator of race or other problematic attributes," explained Singh.
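A common test for that kind of leakage, which goes beyond what the article describes, is to check how well the supposedly neutral feature predicts the protected attribute itself. A minimal sketch with scikit-learn and made-up data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Made-up data: a location code (the candidate proxy) that is strongly,
    # but not perfectly, correlated with a protected attribute.
    rng = np.random.default_rng(0)
    location = rng.integers(0, 5, size=1000)        # 5 hypothetical regions
    protected = (location >= 3).astype(int)
    flip = rng.random(1000) < 0.1                   # 10% noise in the link
    protected = np.where(flip, 1 - protected, protected)

    X = np.eye(5)[location]  # one-hot encode the location column
    score = cross_val_score(LogisticRegression(), X, protected, cv=5).mean()

    # Accuracy far above the base rate means dropping the protected column
    # is not enough: the model can reconstruct it from location alone.
    print(f"location predicts the protected attribute with accuracy {score:.2f}")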

Over the last year, there have been multiple efforts to better explain the rationale behind AI decisions.

Currently, the Defense Advanced Research Projects Agency (DARPA) is funding 13 research groups pursuing a range of approaches to making AI more explainable.
