Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)
Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Explaining image classifiers with LIME

In the previous section, we saw how easily LIME can be applied to explain models trained on tabular data. The main challenge, however, arises when explaining complex deep learning models trained on unstructured data such as images. Deep learning models are generally far more effective than conventional ML models on image data because they can perform automatic feature extraction. They can extract complex low-level features such as stripes, edges, contours, corners, and motifs, and even higher-level features such as larger shapes and certain parts of the object. These higher-level features are usually referred to as Regions of Interest (RoI) in the image, or superpixels, as they are collections of pixels covering a particular area of the image. The low-level features are not human-interpretable, but the high-level features are, as any non-technical end user will relate to the images...
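
To make this concrete, here is a minimal sketch of how the lime library's LimeImageExplainer can be applied to a single prediction of an image classifier. The model, the image variable, and the chosen parameter values (such as num_samples) are assumptions for illustration only:

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Assumption: `image` is a single RGB image as a NumPy array of shape
# (H, W, 3), and `model` is any trained image classifier (for example,
# a Keras CNN) whose predict method maps a batch of images to class
# probabilities.
def classifier_fn(images):
    # LIME passes batches of perturbed images; return class probabilities
    return model.predict(np.array(images))

# Create the LIME explainer for image data
explainer = lime_image.LimeImageExplainer()

# Explain one prediction by perturbing the image's superpixels
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=5,      # explain the 5 highest-scoring classes
    hide_color=0,      # color used to mask switched-off superpixels
    num_samples=1000   # number of perturbed samples to generate
)

# Retrieve the superpixels that most support the top predicted class
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,    # show the 5 most influential superpixels
    hide_rest=False
)

# Overlay the superpixel boundaries for visualization
# (divide by 255 only if the image is in the 0-255 pixel range)
highlighted = mark_boundaries(temp / 255.0, mask)

Under the hood, LIME segments the image into superpixels, generates perturbed copies by switching superpixels on and off, queries the classifier on those copies, and fits a simple local surrogate model; the returned mask highlights the superpixels that contribute most to the chosen class.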