Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Chapter 7: Practical Exposure to Using SHAP in ML

In the previous chapter, we discussed SHapley Additive exPlanations (SHAP), one of the most popular model explainability frameworks, and worked through a practical example of using SHAP to explain regression models. However, SHAP can also explain other types of models trained on different types of datasets. The previous chapter gave you a brief conceptual understanding of the different types of explainers available in SHAP for models trained on different types of datasets; in this chapter, you will get the practical exposure needed to apply them.

More specifically, you will learn how to apply TreeExplainer to explain tree ensemble models trained on structured tabular data. You will also learn how to apply SHAP's DeepExplainer and GradientExplainer to deep learning models trained on image data. As you learned in the previous chapter, the KernelExplainer in...
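As a preview of the kind of workflow covered in this chapter, the following is a minimal sketch of applying SHAP's TreeExplainer to a tree ensemble model trained on tabular data. The dataset, model choice, and plotting calls here are illustrative assumptions, not the book's own worked example.

import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Load a simple tabular dataset and train a gradient-boosted tree ensemble
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree-based models
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global explanation: summary plot of feature contributions across the dataset
shap.summary_plot(shap_values, X)

# Local explanation: force plot for a single prediction
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :],
                matplotlib=True)

The same overall pattern (construct an explainer around the trained model, compute SHAP values, then visualize them globally and locally) carries over to DeepExplainer and GradientExplainer for image models, with the explainer constructed around a deep learning model and a background sample of inputs instead.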