Applied Machine Learning Explainability Techniques

By : Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Summary

In this chapter, you gained practical exposure to using SHAP with structured tabular data as well as unstructured data such as images and text. We discussed the different explainers available in SHAP for both model-specific and model-agnostic explainability, and we applied SHAP to explain linear models, tree ensemble models, convolutional neural network models, and even transformer models. Using SHAP, we can explain many different types of models trained on different types of data. I highly recommend trying out the end-to-end tutorials provided in the GitHub code repository and exploring them in more depth to acquire deeper practical knowledge.

In the next chapter, we will discuss another interesting topic, concept activation vectors, and explore the practical side of applying the Testing with Concept Activation Vectors (TCAV) framework from Google AI to explain models with human-friendly concepts.