Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy, and promotes AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques blends industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to apply XAI in AI/ML problem-solving using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices for the AI/ML life cycle and will be able to implement XAI methods and approaches in Python to solve industrial problems, successfully addressing the key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Influence-based methods

Influence-based methods help us understand the impact that the features present in a dataset have on the model's decision-making process. They are widely used and often preferred over other methods because they identify the dominant attributes in the data. Identifying these dominant attributes, in both structured and unstructured data, lets us analyze how they influence the model's outcome.
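One common influence-based technique is permutation feature importance: the values of a single feature are randomly shuffled, and the resulting drop in the model's score indicates how strongly that feature influences the predictions. The following is a minimal sketch of this idea using scikit-learn's permutation_importance; the synthetic dataset and the random forest model are illustrative assumptions, not taken from this section:

# A minimal sketch of an influence-based explanation using
# permutation feature importance (scikit-learn). The synthetic
# dataset and the random forest model are illustrative choices.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Create a toy dataset in which only a few features are informative
X, y = make_classification(
    n_samples=1000, n_features=8, n_informative=3, random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy;
# a large drop means the feature strongly influences the outcome
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=42
)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")

Running this prints the features ranked by importance, with the few informative features dominating the list, which is exactly the kind of "dominant attribute" analysis described above.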

For example, let's say you are working on a classification problem to distinguish wolves from Siberian huskies. Suppose that after the training and evaluation process, you have a good model with more than 95% accuracy. But when you try to find the important features using influence-based methods for model explainability, you observe that the model has picked up the surrounding background as the dominant feature for deciding whether an image shows a wolf or a husky. In such cases, even if your model...