Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

What this book covers

Chapter 1, Foundational Concepts of Explainability Techniques, gives the necessary exposure to explainable AI and helps you understand its importance. This chapter covers the various terminology and concepts related to explainability techniques that are used frequently throughout this book. It also covers the key criteria of human-friendly explainable ML systems and different approaches to evaluating the quality of explainability techniques.

Chapter 2, Model Explainability Methods, discusses the various methods used for explaining black-box models. Some of these methods are model-agnostic and some are model-specific; some provide global interpretability while others provide local interpretability. This chapter introduces you to a variety of techniques for explaining ML models and provides recommendations on choosing the right explainability method.

Chapter 3, Data-Centric Approaches, introduces the concept of data-centric XAI. This chapter covers various techniques for explaining the workings of ML systems in terms of the properties of the data: data volume, data consistency, data purity, and actionable insights generated from the underlying training dataset.

Chapter 4, LIME for Model Interpretability, covers the application of one of the most popular XAI frameworks, LIME. This chapter discusses the intuition behind the LIME algorithm and the important properties that make its generated explanations human-friendly. The advantages and limitations of the LIME algorithm are also discussed, along with a code tutorial for applying LIME to a classification problem.
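
As a preview of the kind of tutorial covered there, here is a minimal sketch of applying LIME to a tabular classifier; the Iris dataset and random forest model are illustrative assumptions, not the chapter's exact example:

    # Minimal LIME sketch for tabular classification
    # (assumes: pip install lime scikit-learn).
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    data = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=42)
    model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

    explainer = LimeTabularExplainer(
        X_train,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
        mode='classification')

    # LIME perturbs the instance and fits a local linear surrogate,
    # so the returned weights explain this one prediction only.
    exp = explainer.explain_instance(
        X_test[0], model.predict_proba, num_features=4)
    print(exp.as_list())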

Chapter 5, Practical Exposure to Using LIME in ML, is an extension of the previous chapter, focusing on the practical application of the LIME Python framework to different types of datasets, such as images and text, along with structured tabular data. Practical code examples are covered to give you hands-on experience with the LIME framework. This chapter also discusses whether LIME is a good fit for production-level ML systems.
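
Here is a minimal sketch of the text use case, assuming a scikit-learn pipeline as the classifier and toy data (the chapter's own examples and datasets differ):

    # Minimal LIME sketch for text classification (toy data for illustration).
    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["the movie was great", "awful plot and acting",
             "a wonderful experience", "terrible and boring"]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

    # The pipeline maps raw strings to class probabilities, which is
    # exactly the interface LimeTextExplainer expects.
    pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipeline.fit(texts, labels)

    explainer = LimeTextExplainer(class_names=['negative', 'positive'])
    exp = explainer.explain_instance(
        "great acting but a boring plot",
        pipeline.predict_proba, num_features=4)
    print(exp.as_list())  # word-level weights for the 'positive' class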

Chapter 6, Model Interpretability Using SHAP, focuses on the importance of the SHAP Python framework for model explainability. It builds an intuitive understanding of Shapley values and SHAP, and discusses how to use SHAP for model explainability through a variety of visualizations and explainer methods. A code walkthrough for using SHAP to explain regression models is also covered. Finally, the chapter discusses the key advantages and limitations of SHAP.
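
The following is a minimal sketch of that kind of regression walkthrough; the diabetes dataset and random forest are illustrative assumptions, not the chapter's exact example:

    # Minimal SHAP sketch for a regression model
    # (assumes: pip install shap scikit-learn).
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree ensembles;
    # each value is one feature's contribution to one prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:100])

    # Global view: beeswarm-style summary of feature impact.
    shap.summary_plot(shap_values, X.iloc[:100])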

Chapter 7, Practical Exposure to Using SHAP in ML, provides the necessary practical exposure to using SHAP with structured tabular data as well as unstructured data such as images and text. It discusses the different explainers available in SHAP for both model-specific and model-agnostic explainability, and applies SHAP to explain linear models, tree ensemble models, convolutional neural network models, and even transformer models. Code tutorials are also included to give you hands-on experience with the SHAP framework.
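
As a small taste of the model-specific versus model-agnostic distinction, here is a hedged sketch; the models and dataset are illustrative assumptions:

    # Contrasting a model-specific and a model-agnostic SHAP explainer.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)

    # Model-specific: TreeExplainer exploits the tree structure for
    # fast, exact Shapley values.
    tree_model = GradientBoostingClassifier(random_state=0).fit(X, y)
    tree_sv = shap.TreeExplainer(tree_model).shap_values(X[:50])

    # Model-agnostic: KernelExplainer only needs a prediction function,
    # so it works for any model, at a much higher computational cost.
    svm = SVC(probability=True).fit(X, y)
    background = shap.sample(X, 20)  # small background set keeps it tractable
    kernel_sv = shap.KernelExplainer(
        svm.predict_proba, background).shap_values(X[:5])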

Chapter 8, Human-Friendly Explanations with TCAV, covers the concepts of TCAV (Testing with Concept Activation Vectors), a framework developed by Google AI. This chapter provides both a conceptual understanding of TCAV and practical exposure to applying the Python TCAV framework. The key advantages and limitations of TCAV are discussed, along with interesting ideas about potential research problems that can be solved using concept-based explanations.
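
To make the core idea concrete, here is a conceptual sketch of computing a Concept Activation Vector (CAV) and a TCAV score. This is not the tcav library's API, and the activation and gradient arrays are random stand-ins for what a real network would produce:

    # Conceptual CAV/TCAV sketch with stand-in activations and gradients.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    concept_acts = rng.normal(loc=1.0, size=(100, 64))  # activations for concept images
    random_acts = rng.normal(loc=0.0, size=(100, 64))   # activations for random images

    # A linear classifier separates concept from random activations;
    # its weight vector (the decision-boundary normal) is the CAV.
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * 100 + [0] * 100)
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]

    # TCAV score: fraction of inputs whose class-logit gradient has a
    # positive directional derivative along the CAV.
    grads = rng.normal(size=(100, 64))  # stand-in for real gradients
    tcav_score = float(np.mean(grads @ cav > 0))
    print(f"TCAV score: {tcav_score:.2f}")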

Chapter 9, Other Popular XAI Frameworks, covers seven popular XAI frameworks available in Python: DALEX, Explainerdashboard, InterpretML, ALIBI, DiCE, ELI5, and H2O AutoML explainers. It discusses the explanation methods supported by each framework, their practical applications, and the pros and cons of each. The chapter also provides a quick comparison guide to help you decide which framework to choose for your own use case.
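
As a quick taste of one of these frameworks, here is a hedged DALEX sketch; the model and dataset are illustrative assumptions, and the chapter covers all seven frameworks in detail:

    # Minimal DALEX sketch (assumes: pip install dalex scikit-learn).
    import dalex as dx
    from sklearn.datasets import load_wine
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_wine(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # A DALEX Explainer wraps any fitted model behind a uniform API.
    explainer = dx.Explainer(model, X, y, label="wine forest")

    explainer.model_parts().plot()               # global: permutation importance
    explainer.predict_parts(X.iloc[[0]]).plot()  # local: break-down for one row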

Chapter 10, XAI Industry Best Practices, focuses on best practices for designing explainable AI systems for industrial problems. This chapter discusses the open challenges of XAI and the design guidelines for explainable ML systems that follow from those challenges. It also highlights the importance of data-centric approaches to explainability, interactive machine learning, and prescriptive insights when designing explainable AI/ML systems.

Chapter 11, End User-Centered Artificial Intelligence, introduces the idea of end user-centered artificial intelligence (ENDURANCE) for the design and development of explainable AI/ML systems. It discusses the importance of using XAI to steer toward the main goals of the end user when building explainable AI/ML systems. Using the principles and recommended best practices presented in the chapter, you can bridge the gap between AI and the end user to a great extent.