Hands-On Explainable AI (XAI) with Python

By: Denis Rothman

Overview of this book

Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. The problem, the model, and the relationships among variables and their findings are often subtle, surprising, and technically complex to describe. Hands-On Explainable AI (XAI) with Python will see you work with specific hands-on machine learning Python projects that are strategically arranged to enhance your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools and different applications. You will build XAI solutions in Python, TensorFlow 2, Google Cloud’s XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle. You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, integrate predictions using Python, and build visualizations of machine learning models into user-explainable interfaces. By the end of this AI book, you will have an in-depth understanding of the core concepts of XAI.

What this book covers

Chapter 1, Explaining Artificial Intelligence with Python

Explainable AI (XAI) cannot be summed up in a single method for all participants in a project. When a patient shows signs of COVID-19, West Nile virus, or any other virus, how can a general practitioner and an AI system work together, as a cobot, to determine the origin of the disease? The chapter describes a case study and an AI solution built from scratch to trace the origins of a patient's infection, using a Python program based on k-nearest neighbors and Google Location History.
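
As a minimal, illustrative sketch of the underlying technique (hypothetical coordinates and labels, not the book's program), a k-nearest neighbors classifier relates a new record to its closest known locations:

    # Hypothetical features per record: [latitude, longitude, hours spent there]
    from sklearn.neighbors import KNeighborsClassifier

    X = [[48.85, 2.35, 2.0],    # clinic district
         [48.86, 2.34, 0.5],    # transit stop
         [40.64, -73.78, 3.0]]  # airport
    y = ["clinic", "transit", "airport"]

    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X, y)
    print(knn.predict([[48.851, 2.351, 1.5]]))  # closest known location type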

Chapter 2, White Box XAI for AI Bias and Ethics

Artificial intelligence might sometimes have to make life-or-death decisions. When the autopilot of an autonomous vehicle detects pedestrians suddenly crossing a road, what decision should be made when there is no time to stop?

Can the vehicle change lanes without hitting other pedestrians or vehicles? The chapter describes the MIT moral machine experiment and builds a Python program using decision trees to make real-life decisions.
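
As a minimal sketch of the technique (with invented features, not the MIT moral machine data), a scikit-learn decision tree can be trained and then printed as readable rules:

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical features: [pedestrians ahead, pedestrians in the other lane, braking distance left]
    X = [[2, 0, 0], [1, 3, 0], [0, 0, 1], [4, 1, 0]]
    y = ["swerve", "stay", "brake", "swerve"]

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(tree))  # the learned splits, one rule per line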

Chapter 3, Explaining Machine Learning with Facets

Machine learning is a data-driven training process. Yet, companies rarely provide clean data or even all of the data required to start a project. Furthermore, the data often comes from different sources and formats. Machine learning models involve complex mathematics, even when the data seems acceptable. A project can rapidly become a nightmare from the start.

This chapter implements Facets in Python in a Jupyter Notebook on Google Colaboratory. Facets provides multiple views and tools to track the variables that distort the ML model's results. Finding counterfactual data points, and identifying the causes, can save hours of otherwise tedious classical analysis.
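
As an illustrative sketch of how Facets Dive is typically embedded in a notebook (a toy DataFrame here, and the component's import URL may need to point at a pinned release), the data is serialized to JSON and handed to the facets-dive element:

    import pandas as pd
    from IPython.display import display, HTML

    df = pd.DataFrame({"age": [25, 52, 40], "income": [30, 80, 55]})  # toy data
    jsonstr = df.to_json(orient="records")

    HTML_TEMPLATE = """
    <link rel="import" href="https://raw.githubusercontent.com/PAIR-code/facets/master/facets-dist/facets-jupyter.html">
    <facets-dive id="elem" height="600"></facets-dive>
    <script>
      document.querySelector("#elem").data = {jsonstr};
    </script>"""
    display(HTML(HTML_TEMPLATE.format(jsonstr=jsonstr)))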

Chapter 4, Microsoft Azure Machine Learning Model Interpretability with SHAP

Artificial intelligence designers and developers spend days searching for the right ML model that fits the specifications of a project. Explainable AI provides valuable time-saving information. However, nobody has the time to develop an explainable AI solution for every single ML model on the market!

This chapter introduces model-agnostic explainable AI through a Python program that implements Shapley values with SHAP based on Microsoft Azure's research. This game theory approach provides explanations no matter which ML model it faces. The Python program provides explainable AI graphs showing which variables influence the outcome of a specific result.
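
As a minimal sketch of the Shapley value approach with the open-source shap package (a tree-based regressor on a public dataset, not the book's Azure-based program):

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    data = load_diabetes()
    model = RandomForestRegressor(n_estimators=50).fit(data.data, data.target)

    explainer = shap.TreeExplainer(model)            # fast, exact Shapley values for tree models
    shap_values = explainer.shap_values(data.data)   # one contribution per feature per sample
    shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)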

Chapter 5, Building an Explainable AI Solution from Scratch

Artificial intelligence has progressed so fast in the past few years that moral obligations have sometimes been overlooked. Eradicating bias has become critical to the survival of AI. Machine learning decisions based on racial or ethnic criteria were once accepted in the United States; however, it has now become an obligation to track bias and eliminate the dataset features that could turn discrimination into information a model learns from.

This chapter shows how to eradicate bias and build an ethical ML system in Python with Google's What-If Tool and Facets. The program will take moral, legal, and ethical parameters into account from the very beginning.

Chapter 6, AI Fairness with Google's What-If Tool (WIT)

Google's PAIR (People + AI Research – https://research.google/teams/brain/pair/) designed the What-If Tool (WIT) to investigate the fairness of an AI model. This chapter takes us deeper into explainable AI, introducing a Python program that creates a deep neural network (DNN) with TensorFlow, uses a SHAP explainer, and creates a WIT instance.

The WIT will provide ground truth, cost ratio fairness, and PR curve visualizations. The Python program shows how ROC curves, AUC, slicing, and PR curves can pinpoint the variables that produced a result, using AI fairness and ethical tools to make predictions.
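
As a hedged sketch of how a WIT instance is typically created in a notebook (assuming the witwidget package; the toy records and the stand-in prediction function below replace the book's trained DNN):

    import tensorflow as tf
    from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

    def to_example(age, income, label):
        # Pack one record into the tf.Example format that WIT expects.
        feats = {
            "age": tf.train.Feature(float_list=tf.train.FloatList(value=[age])),
            "income": tf.train.Feature(float_list=tf.train.FloatList(value=[income])),
            "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
        }
        return tf.train.Example(features=tf.train.Features(feature=feats))

    examples = [to_example(25.0, 30.0, 0), to_example(52.0, 80.0, 1), to_example(40.0, 55.0, 1)]

    def predict_fn(examples):
        # Stand-in for a trained model: returns [p(class 0), p(class 1)] per example.
        scores = [e.features.feature["income"].float_list.value[0] / 100.0 for e in examples]
        return [[1 - s, s] for s in scores]

    config = WitConfigBuilder(examples).set_custom_predict_fn(predict_fn)
    WitWidget(config, height=600)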

Chapter 7, A Python Client for Explainable AI Chatbots

The future of artificial intelligence will increasingly involve bots and chatbots. This chapter shows how chatbots can provide XAI through a conversational user interface (CUI) built with Google Dialogflow. A Google Dialogflow Python client will be implemented with an API that communicates with Google Dialogflow.

The goal is to simulate user interactions for decision-making XAI based on the Markov Decision Process (MDP). The XAI dialog is simulated in a Jupyter Notebook, and the agent is tested on Google Assistant.
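
As a hedged sketch of a single detect-intent call with the dialogflow Python client (placeholder project and session IDs; newer google-cloud-dialogflow releases change the call signature slightly, and the book's client wraps calls like this in its MDP-based dialog logic):

    import dialogflow

    project_id, session_id = "my-gcp-project", "session-001"  # placeholders
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.types.TextInput(text="Which route should I take?", language_code="en")
    query_input = dialogflow.types.QueryInput(text=text_input)

    response = session_client.detect_intent(session=session, query_input=query_input)
    print(response.query_result.fulfillment_text)  # the agent's reply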

Chapter 8, Local Interpretable Model-Agnostic Explanations (LIME)

This chapter takes model-agnostic explanations further with Local Interpretable Model-agnostic Explanations (LIME). The chapter shows how to create a model-agnostic explainable AI Python program that can explain the results of random forests, k-nearest neighbors, gradient boosting, decision trees, and extra trees.

The Python program creates a unique LIME explainer with visualizations no matter which ML model produces the results.
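
As a minimal sketch with the lime package (a random forest on a public dataset, not the book's exact program), a local explanation for one prediction looks like this:

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    model = RandomForestClassifier().fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
    )
    explanation = explainer.explain_instance(data.data[0], model.predict_proba)
    print(explanation.as_list())  # (feature condition, local weight) pairs for this sample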

Chapter 9, The Counterfactual Explanations Method

It is sometimes impossible to determine why a data point has not been classified as expected. No matter how we look at it, we cannot find which feature or features generated the error.

Visualizing counterfactual explanations can display the features of a data point that has been classified in the wrong category right next to the closest data point that was classified in the right category. With the Python program created in this chapter and a WIT instance, an explanation can be rapidly tracked down.

The WIT created by this chapter's Python program can define the belief, truth, justification, and sensitivity of a prediction.

Chapter 10, Contrastive XAI

Sometimes, even the most potent XAI tools cannot pinpoint the reason an ML program made a decision. The Contrastive Explanation Method (CEM) implemented in Python in this chapter will find precisely how a data point crossed the line into another class.

The program created in this chapter prepares an MNIST dataset for CEM, defines a CNN, tests the accuracy of the CNN, and defines and trains an autoencoder. From there, the program creates a CEM explainer that will provide visual explanations of pertinent negatives and positives.

Chapter 11, Anchors XAI

Rules have often been associated with hardcoded expert systems. But what if an XAI tool could generate rules automatically to explain a result? Anchors are high-precision rules that are produced automatically.

This chapter's Python program creates anchors for text classification and images. The program pinpoints the precise pixels of an image that made a model change its mind and select a class.
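
As a hedged sketch of anchor explanations with the alibi library on tabular data (the chapter itself anchors text and images), an anchor is the smallest set of conditions under which the prediction holds:

    from alibi.explainers import AnchorTabular
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    clf = RandomForestClassifier().fit(data.data, data.target)

    explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
    explainer.fit(data.data)                      # discretizes features for the rule search
    explanation = explainer.explain(data.data[0], threshold=0.95)
    print(explanation.anchor)                     # high-precision rules for this prediction
    print(explanation.precision)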

Chapter 12, Cognitive XAI

Human cognition has provided the framework for the incredible technical progress made by humanity in the past few centuries, including artificial intelligence. This chapter puts human cognition to work to build cognitive rule bases for XAI.

The chapter explains how to build a cognitive dictionary and a cognitive sentiment analysis function to explain the marginal contributions of features from a human perspective. A Python program shows how to measure marginal cognitive contributions.
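
As a purely hypothetical sketch of the idea (the dictionary entries and function name are invented for illustration), a cognitive sentiment function sums the marginal contribution assigned to each word:

    # Hypothetical cognitive dictionary: each entry is a word's marginal contribution.
    cognitive_dict = {"helpful": 0.6, "fast": 0.3, "confusing": -0.7, "slow": -0.4}

    def cognitive_sentiment(text):
        words = text.lower().split()
        contributions = {w: cognitive_dict[w] for w in words if w in cognitive_dict}
        return sum(contributions.values()), contributions

    score, details = cognitive_sentiment("the support was helpful but slow")
    print(score, details)  # overall score and each word's marginal contribution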

This chapter sums up the essence of XAI, preparing the reader to build a future of artificial intelligence that contains real human intelligence and ethics.