Interpretable Machine Learning with Python

By Serg Masís

Overview of this book

Do you want to gain a deeper understanding of your models and better mitigate the poor prediction risks that machine learning interpretation addresses? If so, then Interpretable Machine Learning with Python deserves a place on your bookshelf. We'll start off with the fundamentals of interpretability, its relevance in business, and its key aspects and challenges. As you progress through the chapters, you'll focus on how white-box models work, compare them to black-box and glass-box models, and examine their trade-offs. You'll also get up to speed with a vast array of interpretation methods, also known as Explainable AI (XAI) methods, and learn how to apply them to different use cases, be it for classification or regression, with tabular, time-series, image, or text data. In addition to step-by-step code, this book will help you interpret model outcomes using examples. You'll get hands-on with tuning models and training data for interpretability by reducing complexity, mitigating bias, placing guardrails, and enhancing reliability. The methods you'll explore range from state-of-the-art feature selection and dataset debiasing methods to monotonic constraints and adversarial retraining. By the end of this book, you'll be able to understand ML models better and enhance them through interpretability tuning.
Table of Contents (19 chapters)

Section 1: Introduction to Machine Learning Interpretation
Section 2: Mastering Interpretation Methods
Section 3: Tuning for Interpretability

Chapter 4: Fundamentals of Feature Importance and Impact

In the first part of this book, we introduced the concepts, challenges, and purpose of machine learning interpretation. This chapter kicks off the second part, which dives into a vast array of methods used to diagnose models and understand their underlying data. One of the biggest questions answered by interpretation methods is: what matters most to the model, and how does it matter? More precisely, interpretation methods can shed light on the overall importance of features and on how they, individually or combined, impact a model's outcome. This chapter will provide a theoretical and practical foundation for approaching these questions.

In this chapter, we will first use several scikit-learn models' intrinsic parameters to derive the most important features. Then, realizing how inconsistent these results are, we will learn how to use Permutation Feature Importance (PFI) to rank the features intuitively and...
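To ground these two ideas before the chapter's walkthrough, here is a minimal sketch, not the book's actual example: it assumes a RandomForestClassifier and scikit-learn's built-in breast cancer dataset as stand-ins for the chapter's model and data. It contrasts a model's intrinsic, impurity-based feature_importances_ attribute with PFI computed on held-out data via sklearn.inspection.permutation_importance.

```python
# Minimal sketch (illustrative model and dataset, not the chapter's own):
# compare a model's intrinsic feature importances with Permutation
# Feature Importance (PFI) on a held-out test set.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Intrinsic importance: impurity-based, derived from the training process
intrinsic = pd.Series(model.feature_importances_, index=X.columns)
print(intrinsic.sort_values(ascending=False).head())

# PFI: average drop in test-set score when each feature is shuffled
pfi = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=42
)
pfi_scores = pd.Series(pfi.importances_mean, index=X.columns)
print(pfi_scores.sort_values(ascending=False).head())
```

The two rankings often disagree: impurity-based importance is computed on training data and is known to favor high-cardinality features, whereas PFI measures how much a held-out score degrades when a feature's values are shuffled, which is what makes it the more intuitive ranking the chapter builds toward.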