
Hands-On Explainable AI (XAI) with Python

By: Denis Rothman

Overview of this book

Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. Describing the problem, the model, and the relationships among variables and their findings is often subtle, surprising, and technically complex. Hands-On Explainable AI (XAI) with Python will see you work on hands-on machine learning Python projects that are strategically arranged to strengthen your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools and different applications. You will build XAI solutions in Python, TensorFlow 2, Google Cloud's XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle. You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and use Python to integrate predictions and machine learning model visualizations into user-explainable interfaces. By the end of this AI book, you will have an in-depth understanding of the core concepts of XAI.

Creating a SHAP explainer

In this section, we will create a SHAP explainer.

We described SHAP in detail in Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP. If you wish, you can go through that chapter again before moving on.

We pass a subset of our training data to the explainer:

# Create a SHAP explainer by passing a subset of our training data
import shap

# Use at most 500 training samples as the background dataset
sample_size = 500
if sample_size > len(train_data.values):
    sample_size = len(train_data.values)

# model is the trained deep learning model built earlier in the chapter;
# train_data is the training DataFrame
explainer = shap.DeepExplainer(model,
                               train_data.values[:sample_size])
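
With the explainer created, the Shapley values can be computed by passing it a sample of records. The following is a minimal sketch rather than the book's exact listing; test_data stands in for whichever evaluation DataFrame the chapter uses:

# Compute Shapley values for a sample of records
# (test_data is an assumed evaluation DataFrame with the same columns
# as train_data; substitute the dataset used in this chapter)
shap_values = explainer.shap_values(test_data.values[:sample_size])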

We will now generate a plot of Shapley values.

The plot of Shapley values

In Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP, we learned that Shapley values measure the marginal contribution of a feature to the output of an ML model. We also created a plot. In this case, the SHAP plot contains all of the predictions, not only one prediction at a time.
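
One common way to display all of the predictions at once is SHAP's summary plot, which shows each prediction's Shapley value for every feature. The sketch below assumes the shap_values computed in the previous sketch and feature names taken from train_data; it illustrates the idea rather than reproducing the book's exact listing:

# A single-output DeepExplainer may return a one-element list;
# unwrap it before plotting
values = shap_values[0] if isinstance(shap_values, list) else shap_values
# Summary plot over all sampled predictions: one dot per prediction
# and per feature, colored by the feature's value
shap.summary_plot(values,
                  test_data.values[:sample_size],
                  feature_names=list(train_data.columns))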

As in Chapter 4, we...