Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Potential pitfalls

In the previous section, we learned how easily the LIME Python framework can be used to explain black-box models for a classification problem. Unfortunately, the algorithm has certain limitations, and there are a few scenarios in which it is not effective:

  • While LIME provides interpretable explanations, a particular choice of interpretable data representation and interpretable surrogate model still has limitations. Although the underlying trained model can be treated as a black box, with no assumptions made about it during the explanation process, certain interpretable representations are simply not powerful enough to capture some of the model's behaviors. For example, if we are building an image classifier to distinguish black-and-white images from colored images, the presence or absence of superpixels is not useful for providing the explanations, because the property being predicted depends on every pixel rather than on any particular region (see the sketch after this list).
  • As discussed earlier, LIME learns...
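To make the superpixel limitation concrete, here is a minimal sketch, not taken from the book, that runs LIME's image explainer against a hypothetical color-versus-grayscale scorer (the predict_fn below is an illustrative stand-in, not a real trained model; the lime and scikit-image packages are assumed to be installed). Because the image explainer can only switch superpixels on and off, its output highlights regions, and it has no way to express a property such as "the whole image is grayscale" that depends on every pixel at once.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Hypothetical color-vs-grayscale scorer: measures how far each pixel's
    # channels deviate from their per-pixel mean (zero for a grayscale image).
    images = np.asarray(images, dtype=float)
    colorfulness = np.abs(
        images - images.mean(axis=-1, keepdims=True)
    ).mean(axis=(1, 2, 3))
    p_color = np.clip(colorfulness / 0.2, 0.0, 1.0)
    return np.stack([1.0 - p_color, p_color], axis=1)  # [P(grayscale), P(color)]

image = np.random.rand(64, 64, 3)  # a random color image with values in [0, 1]

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=2, hide_color=0, num_samples=1000
)

# The explanation can only highlight superpixels; it cannot express that the
# decision depends on the color content of every pixel in the image.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
highlighted = mark_boundaries(temp, mask)

In a sketch like this, the highlighted superpixels look arbitrary: every region contributes roughly equally to the color score, so no small set of regions explains the prediction, which is exactly the mismatch between the chosen interpretable representation and the model's behavior described above.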