Interpretable Machine Learning with Python - Second Edition

By: Serg Masís
Overview of this book

Interpretable Machine Learning with Python, Second Edition, brings to light the key concepts of interpreting machine learning models by analyzing real-world data, providing you with a wide range of skills and tools to decipher the results of even the most complex models. Build your interpretability toolkit with several use cases, from flight delay prediction to waste classification to COMPAS risk assessment scores. This book is full of useful techniques, introducing each of them alongside the right use case. Learn traditional methods, such as feature importance and partial dependence plots, as well as integrated gradients for NLP interpretations and gradient-based attribution methods, such as saliency maps. In addition to the step-by-step code, you’ll get hands-on with tuning models and training data for interpretability by reducing complexity, mitigating bias, placing guardrails, and enhancing reliability. By the end of the book, you’ll be confident in tackling interpretability challenges with black-box models using tabular, language, image, and time series data.

Mitigating bias

We can mitigate bias at three different levels, with different methods operating at each level:

  • Preprocessing: These are interventions to detect and remove bias from the training data before training the model. Preprocessing methods have the advantage of tackling bias at the source. On the other hand, any bias they fail to detect can still be amplified by the model (a minimal reweighing sketch follows this list).
  • In-processing: These methods mitigate bias during model training and are, therefore, highly dependent on the model; unlike preprocessing and post-processing methods, they tend not to be model-agnostic. They also require hyperparameter tuning to calibrate fairness metrics (see the constrained-training sketch after this list).
  • Post-processing: These methods mitigate bias during model inference (a group-threshold sketch follows this list). In Chapter 6, Anchors and Counterfactual Explanations, we touched on using the What-If Tool to choose the right thresholds (see Figure 6.13 in that chapter), and we manually adjusted them to achieve parity with...
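To make the preprocessing idea concrete, here is a minimal sketch of reweighing (Kamiran & Calders, 2012), which assigns each training row a sample weight so that the protected attribute and the label behave as if they were statistically independent. The DataFrame and the "sex" and "approved" column names are hypothetical placeholders, not this book's dataset:

import pandas as pd

def reweighing_weights(df, protected="sex", label="approved"):
    """Give each row the weight it would carry if the protected
    attribute and the label were statistically independent."""
    n = len(df)
    # Marginal probabilities of each group and each label value
    p_group = df[protected].value_counts(normalize=True)
    p_label = df[label].value_counts(normalize=True)
    # Observed probability of each (group, label) combination
    p_joint = df.groupby([protected, label]).size() / n
    # Weight = probability expected under independence / observed probability
    return df.apply(
        lambda row: (p_group[row[protected]] * p_label[row[label]])
        / p_joint[(row[protected], row[label])],
        axis=1,
    )

# Hypothetical usage: pass the weights to any model that accepts them
# weights = reweighing_weights(train_df)
# model.fit(X_train, y_train, sample_weight=weights)

Rows from underrepresented (group, label) combinations get weights above 1, and overrepresented ones get weights below 1, so the model trains as if the data had been balanced.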
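For in-processing, one illustrative option (not necessarily the method this chapter uses) is the open-source Fairlearn library's reductions approach, which wraps any scikit-learn-style estimator and optimizes accuracy subject to a fairness constraint. The training variables below are hypothetical placeholders:

from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from sklearn.linear_model import LogisticRegression

# Train a classifier subject to an (approximate) demographic parity constraint
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
# mitigator.fit(X_train, y_train, sensitive_features=train_df["sex"])
# y_pred = mitigator.predict(X_test)

Note that the allowed constraint violation is itself a hyperparameter, which is exactly the calibration burden mentioned in the bullet above.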
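Finally, the threshold adjustment described in the last bullet can also be automated rather than done by hand in the What-If Tool. A minimal sketch, assuming a fitted probabilistic classifier and a hypothetical group column, picks one score threshold per group so that every group ends up with roughly the same selection rate (demographic parity):

import numpy as np

def parity_thresholds(scores, groups, target_rate=0.5):
    """Pick a per-group score threshold so each group's
    positive-prediction rate is approximately target_rate."""
    return {
        g: np.quantile(scores[groups == g], 1 - target_rate)
        for g in np.unique(groups)
    }

# Hypothetical usage with a fitted classifier and a "sex" group column:
# scores = model.predict_proba(X_test)[:, 1]
# groups = X_test["sex"].to_numpy()
# thr = parity_thresholds(scores, groups, target_rate=0.3)
# y_pred = np.array([s >= thr[g] for s, g in zip(scores, groups)])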