Three Methods Researchers Use To Understand AI Decisions – RTInsights

Posted: August 20, 2022 at 2:19 pm

Making sense of AI decisions matters to researchers, decision-makers, and the wider public. Fortunately, there are methods available to help us understand those decisions better.

Deep-learning models of the kind used by leading-edge AI companies and academic labs have become so complex that even the researchers who built them struggle to understand the decisions they make.

This was shown most clearly to a wide audience during DeepMind's AlphaGo matches, in which data scientists and professional Go players were regularly bamboozled by the AI's decision-making as it made unorthodox plays that were not considered the strongest moves.


In an attempt to better understand the models they build, AI researchers have developed three main explanation methods. All three are local explanation methods: they explain a single, specific decision rather than the behavior of an entire model, which can be difficult to characterize at scale.

Yilun Zhou, a graduate student in the Interactive Robotics Group of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), discussed these methods in an MIT News article.

Feature attribution

With feature attribution, an AI model identifies which parts of an input were important to a specific decision. In the case of an X-ray, researchers can see a heatmap, or the individual pixels, that the model treated as most important when making its decision.

"Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted," said Zhou.
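As a rough illustration of the idea, the sketch below computes a simple gradient-based saliency map with PyTorch. The stand-in model, the random input tensor, and the 28x28 image size are all hypothetical placeholders, not the systems Zhou describes; real attribution tools typically use more robust techniques than a raw input gradient.

```python
# Minimal sketch of gradient-based feature attribution (a saliency map),
# assuming a hypothetical PyTorch classifier and a synthetic grayscale image.
import torch
import torch.nn as nn

# Hypothetical stand-in classifier; in practice this would be the trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# One synthetic image patch; requires_grad lets us attribute the score to pixels.
image = torch.rand(1, 1, 28, 28, requires_grad=True)

logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Backpropagate the predicted class score down to the input pixels.
logits[0, predicted_class].backward()

# The absolute gradient per pixel serves as an importance heatmap:
# large values mark pixels the decision was most sensitive to.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([28, 28]) heatmap to overlay on the image
```

Highlighted regions can then be checked against domain knowledge, for example whether a watermark rather than a tumor is driving the prediction.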

Counterfactual explanation

When a decision comes back, the person on the receiving end may be confused about why the AI decided one way or the other. As AI is deployed in high-stakes settings such as prisons, insurance, or mortgages, knowing why a model rejected an application or appeal should help the affected person secure approval the next time they apply.

"The good thing about the [counterfactual] explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn't get it, this explanation would tell them what they need to do to achieve their desired outcome," said Zhou.
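The sketch below illustrates one naive way such a counterfactual could be found: nudging one input feature at a time until a toy approval model flips its decision. The scikit-learn classifier, the "income" and "debt ratio" features, and the synthetic data are assumptions made for illustration, not a method described by Zhou.

```python
# Minimal counterfactual-search sketch on a toy, two-feature approval model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # columns: [scaled income, scaled debt ratio]
y = (X[:, 0] - X[:, 1] > 0).astype(int)     # toy approval rule
clf = LogisticRegression().fit(X, y)

applicant = np.array([[-0.5, 0.4]])         # an application the model rejects
print("current decision:", clf.predict(applicant)[0])

# Try the smallest single-feature changes first and stop when the decision flips.
steps = np.linspace(0.05, 2.0, 40)
deltas = np.ravel(np.column_stack([steps, -steps]))   # 0.05, -0.05, 0.10, -0.10, ...
best = None
for feature in range(applicant.shape[1]):
    for delta in deltas:
        candidate = applicant.copy()
        candidate[0, feature] += delta
        if clf.predict(candidate)[0] == 1:
            if best is None or abs(delta) < abs(best[1]):
                best = (feature, delta)
            break   # smallest flip for this feature found; move to the next one

if best is not None:
    print(f"Changing feature {best[0]} by {best[1]:+.2f} flips the decision to approved")
```

The output is exactly the kind of actionable statement Zhou describes: which input to change, and by how much, to reach the desired outcome.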

Sample importance

Sample importance explanation requires access to the training data behind the model. If researchers notice what they perceive to be an error, they can run a sample importance explanation to see which training samples the model relied on most for that decision, and whether it was fed data it couldn't handle, leading to an error in judgment.
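One inexpensive proxy for this idea is sketched below: retrieving the training examples nearest to a suspect input so they can be inspected for labeling or data errors. The synthetic dataset and the nearest-neighbor shortcut are illustrative assumptions; production tools often use heavier machinery such as influence functions.

```python
# Minimal sketch of sample-importance-style debugging via nearest training examples.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 8))         # training features
y_train = rng.integers(0, 2, size=500)      # training labels (possibly noisy)

suspect_input = rng.normal(size=(1, 8))     # input whose prediction looks wrong

# Retrieve the training samples closest to the suspect input; if several of
# them turn out to be mislabeled or corrupted, they are likely culprits.
nn = NearestNeighbors(n_neighbors=5).fit(X_train)
distances, indices = nn.kneighbors(suspect_input)
for dist, idx in zip(distances[0], indices[0]):
    print(f"training sample {idx}: label={y_train[idx]}, distance={dist:.2f}")
```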
