Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

An intuitive understanding of SHAP and Shapley values

As discussed in Chapter 1, Foundational Concepts of Explainability Techniques, explaining black-box models is a necessity for increasing AI adoption. Algorithms that are model-agnostic and can provide local explainability with a global perspective are the ideal choice of explainability technique in machine learning (ML). That is why LIME is a popular choice in XAI. SHAP is another popular explainability technique in ML and, in certain scenarios, is more effective than LIME. In this section, we will build an intuitive understanding of the SHAP framework and discuss how it provides model explainability.
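To make the local-plus-global idea concrete, here is a minimal sketch (my illustration, not an example from the book) of how SHAP is commonly applied with the open-source shap library; the diabetes dataset and random forest model are placeholder choices used only to show the pattern:

```python
# Minimal sketch (illustrative, not from the book): explaining a tree-based
# model with SHAP. Assumes scikit-learn and the shap library are installed.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple black-box model on a sample regression dataset
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=42).fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer(X.iloc[:100])

# Local explanation: per-feature contributions to a single prediction
shap.plots.waterfall(shap_values[0])

# Global perspective: mean absolute SHAP values aggregated across instances
shap.plots.bar(shap_values)
```

The same per-instance SHAP values that explain one prediction can be aggregated across many instances, which is how the framework offers a global perspective on top of local explanations.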

Introduction to SHAP and Shapley values

The SHAP framework was introduced by Scott Lundberg and Su-In Lee in their 2017 research paper, A Unified Approach to Interpreting Model Predictions (https://arxiv.org/abs/1705.07874). SHAP is based on the concept of Shapley values...
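As a quick illustration of the underlying game-theoretic idea (a toy example of my own, not taken from the book), the Shapley value of a player is its average marginal contribution over all possible coalitions of the other players. The sketch below computes exact Shapley values for a tiny two-player cooperative game with a hypothetical payoff function:

```python
# Toy sketch of exact Shapley value computation (illustrative, not from the book).
# For players N and a value function v(S), the Shapley value of player i is the
# weighted average of marginal contributions v(S ∪ {i}) - v(S) over subsets S.
from itertools import combinations
from math import factorial

def shapley_value(players, v, i):
    """Exact Shapley value of player i under value function v (set -> float)."""
    others = [p for p in players if p != i]
    n = len(players)
    total = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            S = frozenset(subset)
            # Weight = |S|! * (n - |S| - 1)! / n!
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (v(S | {i}) - v(S))
    return total

# Hypothetical payoff function: players 'a' and 'b' earn more together
def v(S):
    payoffs = {frozenset(): 0, frozenset('a'): 10,
               frozenset('b'): 20, frozenset('ab'): 50}
    return payoffs[frozenset(S)]

for p in 'ab':
    print(p, shapley_value('ab', v, p))  # a -> 20.0, b -> 30.0
```

Note that the two Shapley values sum to 50, the payoff of the full coalition; this fair-attribution property is exactly what SHAP exploits when distributing a model's prediction among its input features.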