Hands-On Explainable AI (XAI) with Python

By: Denis Rothman

Overview of this book

Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. The problem, the model, and the relationships among variables and their findings are often subtle, surprising, and technically complex to describe. Hands-On Explainable AI (XAI) with Python will see you work with specific hands-on machine learning Python projects that are strategically arranged to enhance your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications. You will build XAI solutions in Python, TensorFlow 2, Google Cloud's XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open source XAI tools for Python that can be used throughout the machine learning project life cycle. You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and integrate predictions using Python, along with supporting machine learning model visualizations in user-explainable interfaces. By the end of this AI book, you will have an in-depth understanding of the core concepts of XAI.

Preface

In today's era of AI, accurately interpreting and communicating trustworthy AI findings is becoming a crucial skill to master. Artificial intelligence often surpasses human understanding, and the results of machine learning models can prove difficult, and sometimes impossible, to explain. Both users and developers face challenges when asked to explain how and why an AI decision was made.

An AI designer cannot possibly create a single explainable AI solution that covers the hundreds of machine learning and deep learning models in use. Effectively translating AI insights to business stakeholders requires individual planning, design, and visualization choices. European and US law has opened the door to litigation when results cannot be explained, yet in real-life implementations developers face overwhelming amounts of data and results, making it nearly impossible to find explanations without the proper tools.

In this book, you will learn about tools and techniques using Python to visualize, explain, and integrate trustworthy AI results to deliver business value, while avoiding common issues with AI bias and ethics.

Throughout the book, you will work with hands-on machine learning projects in Python and TensorFlow 2.x. You will learn how to use WIT, SHAP, LIME, CEM, and other key explainable AI tools. You will explore tools designed by IBM, Google, Microsoft, and other advanced AI research labs.
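
As a taste of what these tools look like in practice, here is a minimal sketch of explaining a tabular model with SHAP. It assumes the shap and scikit-learn packages are installed; the diabetes dataset and random forest regressor are illustrative choices, not examples taken from the book:

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Load a small tabular regression dataset (illustrative choice, not from the book)
    X, y = load_diabetes(return_X_y=True, as_frame=True)

    # Train a simple model whose predictions we want to explain
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values: each feature's contribution to each prediction
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Summary plot: a global view of which features influence the model most
    shap.summary_plot(shap_values, X)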

You will be introduced to several open source explainable AI tools for Python that can be used throughout the machine learning project life cycle. You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and integrate predictions using Python, along with supporting machine learning model visualizations in user-explainable interfaces.

We will build XAI solutions in Python and TensorFlow 2.x, and use Google Cloud's XAI platform and Google Colaboratory.