Altruist: A New Method To Explain Interpretable Machine Learning Through Local Interpretations of Predictive Models

Artificial intelligence (AI) and machine learning (ML) are the digital world's trendsetters of recent times. Although ML models can make accurate predictions, the logic behind those predictions often remains unclear to users. A lack of evaluation and selection criteria also makes it difficult for end-users to choose the most appropriate interpretation technique.

How do we extract insights from a model? Which features should be prioritized when making predictions, and why? These questions remain prevalent. Interpretable Machine Learning (IML) grew out of them: it is a layer on top of ML models that helps human beings understand the procedure and logic behind a machine learning model's inner workings.

Ioannis Mollas, Nick Bassiliades, and Grigorios Tsoumakas have introduced a new methodology to make IML more reliable and understandable for end-users. Altruist, a meta-learning method, aims to help the end-user choose an appropriate interpretation technique, based on feature importance, by providing interpretations through logic-based argumentation.

The meta-learning methodology is composed of several components; the paper and repository linked below give the full details, and a minimal sketch of the core idea follows the links.

Paper: https://arxiv.org/pdf/2010.07650.pdf

GitHub: https://github.com/iamollas/Altruist
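
A central idea described in the paper is to treat each feature-importance value as a claim that can be checked against the model itself: if a technique says a feature pushes the prediction up, then perturbing that feature should move the model's output in the corresponding direction, and claims the model contradicts are flagged as "untruthful". The Python sketch below illustrates that idea only; it is not the authors' implementation (see the GitHub repository for that), and the function names, perturbation sizes, and random stand-in importance scores are all assumptions made for this example.

# A minimal sketch, in the spirit of the paper, of testing whether a
# feature-importance claim is "truthful" with respect to the model.
# Everything here (is_truthful, count_untruthful, the perturbation
# sizes, the random stand-in scores) is illustrative, not Altruist's API.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def is_truthful(model, x, feature, importance, delta):
    # A positive importance claims that increasing the feature raises
    # the probability of the positive class; a negative importance
    # claims the opposite. The claim is truthful if the model agrees.
    base = model.predict_proba([x])[0, 1]
    x_up = x.copy()
    x_up[feature] += delta
    shift = model.predict_proba([x_up])[0, 1] - base
    return shift * importance >= 0

def count_untruthful(model, x, importances, deltas):
    # Count the claims the model's own behaviour contradicts; the
    # technique with the fewest untruthful features would be preferred.
    return sum(not is_truthful(model, x, f, imp, deltas[f])
               for f, imp in enumerate(importances) if imp != 0)

x = X[0]
deltas = X.std(axis=0)  # perturbation size per feature (an assumption)
# Random stand-ins for the scores two competing techniques might return:
tech_a = np.random.default_rng(0).normal(size=X.shape[1])
tech_b = np.random.default_rng(1).normal(size=X.shape[1])
scores = {"technique_a": count_untruthful(model, x, tech_a, deltas),
          "technique_b": count_untruthful(model, x, tech_b, deltas)}
print(min(scores, key=scores.get), scores)

Running this prints whichever stand-in technique made fewer claims that the model contradicts; in Altruist, per-feature checks of this kind also feed the logic-based argumentation that justifies the final interpretation to the end-user.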
