Hands-On Explainable AI (XAI) with Python

By: Denis Rothman

Overview of this book

Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. Describing the problem, the model, and the relationships among variables and their findings is often subtle, surprising, and technically complex. Hands-On Explainable AI (XAI) with Python will see you work with specific hands-on machine learning Python projects that are strategically arranged to enhance your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications. You will build XAI solutions in Python, TensorFlow 2, Google Cloud's XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle. You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and integrate predictions and machine learning model visualizations into user-explainable interfaces using Python. By the end of this AI book, you will possess an in-depth understanding of the core concepts of XAI.

A cognitive approach to vectorizers

AI and XAI outperform us in many cases. This is a good thing because that's what we designed them for! What would we do with slow and imprecise AI?

However, in some cases, we not only request an explanation from an AI; we also need to understand it.

In Chapter 8, Local Interpretable Model-Agnostic Explanations (LIME), we reached several interesting conclusions. However, we left off with an intriguing comment on the dataset.

In this section, we will use our human cognitive abilities not only to explain but also to understand the third of the conclusions we reached in Chapter 8:

  1. LIME can prove that even accurate predictions cannot be trusted without XAI
  2. Local interpretable models will measure to what extent we can trust a prediction
  3. Local explanations might show that the dataset cannot be trusted to produce reliable predictions (see the sketch after this list)
  4. Explainable AI can prove that a model cannot be trusted or that it is reliable
  5. LIME...
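
To see how conclusion 3 can play out in practice, the following is a minimal sketch of a LIME text explanation built on a TF-IDF vectorizer. The dataset, category names, and classifier here are illustrative assumptions, not the book's exact Chapter 8 setup:

# A minimal LIME sketch, assuming scikit-learn and the lime package are
# installed. The categories and classifier are illustrative choices.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

categories = ["sci.electronics", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories)

# The vectorizer maps raw text to TF-IDF features; wrapping it in a
# pipeline lets LIME call predict_proba on raw strings directly.
pipeline = make_pipeline(TfidfVectorizer(), MultinomialNB())
pipeline.fit(train.data, train.target)

explainer = LimeTextExplainer(class_names=train.target_names)
explanation = explainer.explain_instance(
    train.data[0],            # the sample whose prediction we explain locally
    pipeline.predict_proba,   # classifier function over raw text
    num_features=6,           # top tokens driving this one prediction
)

# Each (token, weight) pair shows how strongly a word pushed the
# prediction toward one class for this single sample.
print(explanation.as_list())

Reading those (token, weight) pairs with our own cognitive judgment is the point of this section: if the tokens that drive a prediction look like dataset artifacts rather than meaningful features, then it is the dataset, not the model, that cannot be trusted.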