Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy, and it promotes AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques blends industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem solving using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices for the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing the key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Chapter 8: Human-Friendly Explanations with TCAV

In the previous few chapters, we discussed LIME and SHAP extensively, and you have seen the practical aspects of applying their Python frameworks to explain black-box models. One major limitation of both frameworks is that their explanations are not consistent or intuitive with how non-technical end users would explain an observation. For example, suppose you have an image of a glass filled with Coke and you use LIME and SHAP to explain a black-box model that correctly classifies the image as Coke. Both LIME and SHAP would highlight the regions of the image that led to the trained model's correct prediction. However, if you asked a non-technical user to describe the image, they would classify it as Coke because of the presence of a dark-colored carbonated liquid in a glass that resembles a cola drink. In other words, human beings tend to relate any observation to known concepts in order to explain it.
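This is exactly the gap that TCAV (Testing with Concept Activation Vectors) tries to close: instead of attributing a prediction to individual pixels or features, it quantifies how sensitive a model's prediction is to a human-friendly concept. As a rough preview of the idea, the following is a minimal sketch of the two core steps from the original TCAV method: training a linear classifier to obtain a Concept Activation Vector (CAV), and computing the TCAV score as the fraction of examples whose prediction has a positive directional derivative along that vector. All function names, variable names, and array shapes here are assumptions for illustration; this is not the API of the official TCAV framework covered in this chapter.

```python
# A minimal sketch of TCAV's core computation, assuming you have already
# extracted flattened layer activations of shape (n_examples, n_features)
# for concept images and random images, plus gradients of the target class
# logit with respect to the same layer. These inputs are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Train a linear classifier to separate concept activations from random
    activations; the CAV is the unit-normalized normal to its decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(class_grads: np.ndarray, cav: np.ndarray) -> float:
    """Fraction of class examples whose class logit increases in the direction
    of the concept, i.e. with a positive directional derivative along the CAV."""
    return float((class_grads @ cav > 0).mean())

# Hypothetical usage: how much does a "dark liquid" concept influence the
# model's "Coke" predictions at a chosen layer?
# cav = compute_cav(acts_dark_liquid, acts_random)
# score = tcav_score(grads_coke, cav)  # near 1.0 => strong concept influence
```

A score close to 1.0 suggests that the concept consistently pushes the model toward that class, which is a far more human-friendly statement than a heatmap of pixel attributions.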
