Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy and promotes AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques blends industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing the key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Model explainability approaches using SHAP

Having read the previous section, you should now have a good understanding of SHAP and Shapley values. In this section, we will discuss various model explainability approaches using SHAP. Data visualization is an important way to explain how complex algorithms work, and SHAP uses a variety of interesting data visualization techniques to represent the approximated Shapley values and thereby explain black-box models. So, let's discuss some of the popular visualization methods used by the SHAP framework.
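Before we look at the individual plots, here is a minimal sketch (not taken from the book) of how the Shapley value estimates that feed these visualizations are typically computed with the shap library. The dataset and model here are illustrative choices; any fitted tree ensemble would work the same way:

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Load a small tabular dataset and fit a black-box model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes approximate Shapley values efficiently for
# tree ensembles; calling it on the data returns an Explanation object
# that the SHAP plotting functions consume.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)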

Visualizations in SHAP

As mentioned previously, SHAP can be used both for the global interpretability of a model and for the local interpretability of an individual inference data instance. However, the raw values generated by the SHAP algorithm are quite difficult to understand unless we make use of intuitive visualizations. The choice of visualization depends on whether we want global or local interpretability, which we will cover in this section...
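As an illustration of these two modes, continuing from the sketch above, the snippet below shows one popular plot for each; beeswarm and waterfall are standard plotting functions in the shap package:

# Global view: the beeswarm (summary) plot aggregates Shapley values
# across all instances to show overall feature importance and the
# direction in which each feature pushes the prediction.
shap.plots.beeswarm(shap_values)

# Local view: the waterfall plot decomposes a single prediction
# (here, the first row of X) into per-feature Shapley contributions.
shap.plots.waterfall(shap_values[0])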