Hands-On Explainable AI (XAI) with Python

By: Denis Rothman

Overview of this book

Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. The problem, the model, and the relationships among variables and their findings are often subtle, surprising, and technically complex to describe. Hands-On Explainable AI (XAI) with Python will see you work with specific hands-on machine learning Python projects that are strategically arranged to enhance your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications. You will build XAI solutions in Python, TensorFlow 2, Google Cloud’s XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle. You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and integrate machine learning predictions and visualizations into user-explainable interfaces using Python. By the end of this AI book, you will have an in-depth understanding of the core concepts of XAI.

Defining explainable AI

Explainable AI, or AI explaining, or AI explainability, or simply XAI, seems simple. You just take an AI algorithm and explain it. It seems so elementary that you might even wonder why we are bothering to write a book on this!

Before the rise of XAI, the typical AI workflow was minimal. The world and the activities surrounding us produce datasets. These datasets were fed into black box AI algorithms, with no way of knowing what went on inside them. Finally, human users had to either trust the system or initiate an expensive investigation. The following diagram represents the former AI process:

Figure 1.1: AI process

In a non-XAI approach, the user is puzzled by the output. The user does not trust the algorithm and does not understand from the output whether the answer is correct or not. Furthermore, the user does not know how to control the process.

In a typical XAI approach, the user obtains answers, as shown in the following diagram. The user trusts the algorithm. Because the user understands how a result was obtained, the user knows whether the answer is correct or not. Furthermore, the user can understand and control the process through an interactive explanation interface:

Figure 1.2: XAI process

The typical XAI flow takes information from the world and the activities that occur in it, produces input datasets, and extracts information from them to build a white box algorithm that allows AI explainability. The user can then consult an interface that accesses interpretable AI models.

The XAI phase will help the users of AI understand the processes, build up trust in the AI systems, and speed AI projects up. If the developers do not implement XAI, they will encounter the ethical and legal stumbling blocks described in Chapter 2, White Box XAI for AI Bias and Ethics.

Understanding the concept of XAI is quite simple, as we just saw. But in AI, once you begin digging into a subject a bit, you always discover some complexity that was not immediately apparent!

Let's now dig into how XAI works to discover a fascinating new area in AI.

We just saw that XAI is located right after the AI black box and before the human interface. But is that always the case? In the following section, we will start by looking into the black box, then explore interpretability and explainability. Finally, we will see when to extract information for XAI and when to build XAI right into an AI model.

Let's first define what looking into a black box algorithm means.

Going from black box models to XAI white box models

Common sense tells us that XAI should be located right after a black box AI algorithm, as shown in the following diagram:

Figure 1.3: Black box AI model

But we must first define what a black box is.

The definition of a black box that applies to AI in mainstream literature is a system whose internal workings are hidden and not understood. A black box AI model takes an input, runs an algorithm or several algorithms, and produces an output that might work, but remains obscure. This definition somewhat fits the expression "black box AI."

However, there is another formal definition of a black box that contradicts this one!

This other definition reflects a conflicting concept, referring to the flight recorder in an aircraft. In this case, the black box records all of the information in real time so that a team of experts can analyze the timeline of a given flight in minute detail.

In this case, a black box contains detailed information. This definition contradicts the algorithm definition! The use of a black box as a way to record important information, as in the case of an aircraft, is similar to software logging.

A log file, for example, records events, messages between systems, and any other type of information the designers of a system saw fit to include in the process. We refer to logging as the action of recording information and the log as the file or table we store the logged information in.

When applying the concept to software, we can use a log file as the equivalent of an aircraft's flight recorder. When we talk about and use logs and log files in this sense, we will not use the term "black box," in order to avoid conflicting uses of the expression.

When we use a log file or any other means of recording or extracting information, we will use the term white box. We will refer to white box models as ones containing information on the inner working of the algorithms.
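
To make the white box idea concrete, here is a minimal sketch of logging with Python's standard logging module. The file name, the toy pricing function, and the logged values are hypothetical; the point is simply that recording inputs and intermediate values turns an opaque computation into one we can inspect after the fact.

    import logging

    # Configure a log file that will act as our "flight recorder"
    # (the file name and format are arbitrary choices for this sketch).
    logging.basicConfig(
        filename="white_box.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    def predict_price(surface, price_per_m2=2000, fixed_fee=500):
        """Toy model: estimate a price from a surface area."""
        estimate = surface * price_per_m2 + fixed_fee
        # Record the inputs and the intermediate result so that a human
        # can later trace exactly how the output was produced.
        logging.info(
            "inputs: surface=%s, price_per_m2=%s, fixed_fee=%s -> estimate=%s",
            surface, price_per_m2, fixed_fee, estimate,
        )
        return estimate

    print(predict_price(50))

Reading white_box.log afterward plays the role of analyzing the flight recorder: every value that led to the output can be reconstructed step by step.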

Once we have access to the information of a white box model, we will have to explain and interpret the data provided.

Explaining and interpreting

Explaining makes something understandable; it makes what was unclear plain to see. Interpreting tells us the meaning of something.

For example, a teacher will explain a sentence in a language using grammar. The words in the sentence are accepted the way they are and explained. We take the words at face value.

When the teacher tries to explain a difficult line of poetry, the same teacher will interpret the ideas that the poet meant to express. The words in the sentence are not accepted the way they are. They require interpretation.

When applied to AI, explaining a k-nearest neighbors (KNN) algorithm, for example, means that we will take the concepts at face value. We might say, "a KNN takes a data point and finds the closest data points to decide which class it is in."

Interpreting a KNN is going beyond a literal explanation. We might say, "The results of a KNN seem consistent, but sometimes a data point might end up in another class because it is close to two classes with similar features. In this case, we should add some more distinctive features." We have just interpreted results and explained the KNN mathematically.
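
As a minimal sketch of both activities, the following code, assuming scikit-learn is installed, fits a KNN on a tiny two-feature dataset and then asks which training points were closest to a new observation. Listing those neighbors is the explaining part; judging whether the features are distinctive enough is the interpreting part. The data points and parameters are purely illustrative.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Tiny illustrative dataset: two features, two classes (0 and 1).
    X = np.array([[1.0, 1.2], [1.1, 0.9], [0.9, 1.0],   # class 0
                  [3.0, 3.1], [3.2, 2.9], [2.9, 3.0]])  # class 1
    y = np.array([0, 0, 0, 1, 1, 1])

    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(X, y)

    # A new point that sits between the two clusters.
    new_point = np.array([[2.0, 2.0]])
    prediction = knn.predict(new_point)

    # "Explaining": show which training points were closest and how far away they are.
    distances, indices = knn.kneighbors(new_point)
    print("Predicted class:", prediction[0])
    print("Nearest neighbor indices:", indices[0])
    print("Neighbor distances:", distances[0])
    print("Neighbor classes:", y[indices[0]])

When the neighbor classes come back mixed and the distances are similar, the interpreting step concludes, as above, that more distinctive features are needed.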

The definitions of explaining and interpreting are very close. Just bear in mind that interpretation goes beyond explaining when it becomes difficult to understand something, and deeper clarification is required.

At this point, we know that a white box AI model generates explainable information. If the information is difficult to understand or obscure, then interpretation is required.

We'll now see whether the information we want to use in a white box AI model should be designed from the start or extracted along the way.