Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

- Section 1 – Conceptual Exposure
- Section 2 – Practical Problem Solving
- Section 3 – Taking XAI to the Next Level

Exploring LinearExplainer in SHAP

LinearExplainer in SHAP is developed specifically for linear machine learning models. As we saw in the previous section, KernelExplainer is model-agnostic but can be very slow. That is one of the main motivations for using LinearExplainer, which can efficiently explain a linear model with independent features and can even account for feature correlation. In this section, we will discuss applying the LinearExplainer method in practice. The detailed notebook tutorial is available at https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques/blob/main/Chapter07/LinearExplainers.ipynb. We have used the same Red Wine Quality dataset as in the tutorial discussed in Chapter 6, Model Interpretability Using SHAP. You can refer to that tutorial to learn more about the dataset, as in this section we will focus only on the application of LinearExplainer.

Application of LinearExplainer in SHAP

For this example, we...