Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Chapter 2: Model Explainability Methods

One of the key goals of this book is to empower its readers to design explainable ML systems that can be used in production to solve critical business problems. In a robust explainable ML system, explainability can be provided in multiple ways depending on the type of problem and the type of data used. Providing explainability for structured tabular data is relatively human-friendly compared to unstructured data such as images and text, since image and text data are more complex and their granular features are less interpretable.

There are different ways to add explainability to ML models: for instance, by extracting information about the data or the model (knowledge extraction), using effective visualizations to justify the prediction outcomes (result visualization), identifying dominant features in the training data and analyzing their effect on the model predictions (influence-based methods), or by comparing model outcomes with known scenarios or situations...
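To make the influence-based category concrete, the following is a minimal sketch of one such technique, permutation feature importance: shuffle one feature column at a time and measure how much the model's error increases. The synthetic data, the simple least-squares model, and the `permutation_importance` helper are all illustrative assumptions, not code from this book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tabular data (an assumption for illustration): the target depends
# strongly on feature 0, weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Fit a simple least-squares linear model to act as the model being explained.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda data: data @ w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(X, y, n_repeats=10):
    """Importance of feature j = average increase in error after shuffling column j."""
    baseline = mse(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        losses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-target link
            losses.append(mse(y, predict(Xp)))
        importances.append(float(np.mean(losses) - baseline))
    return importances

imp = permutation_importance(X, y)
```

Under this setup, feature 0 should receive the largest importance score and the irrelevant feature 2 a score near zero, illustrating how an influence-based method surfaces the dominant features behind a model's predictions.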