Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

SP-LIME

To make explanation methods more trustworthy, explaining a single data instance (that is, a local explanation) is not always sufficient; the end user might want a global understanding of the model in order to trust its robustness. So, the SP-LIME algorithm runs explanations on multiple diverse, yet carefully selected, instances and returns a non-redundant set of explanations.

Now, let me provide an intuitive understanding of the SP-LIME algorithm. The algorithm assumes that the time end users can spend going through individual local explanations is limited, and treats this as a constraint. The number of explanations that end users are willing to examine to understand a model is the budget of the algorithm, denoted by B. Suppose X denotes the set of instances; the task of selecting B instances for the end user to analyze for model explainability is defined as the pick step. The pick step is independent...
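The pick step described above can be sketched as a greedy submodular optimization: from a matrix of local explanation weights (one row per instance, one column per feature), repeatedly select the instance that adds the most global feature-importance coverage until the budget B is exhausted. This is a minimal illustrative sketch following the SP-LIME formulation; the function name and the toy weight matrix are my own, not taken from this book.

```python
import numpy as np

def submodular_pick(W, budget):
    """Greedy pick step (illustrative sketch of SP-LIME's selection).

    W      : (n_instances, n_features) matrix of local explanation weights,
             e.g. LIME feature weights for each explained instance.
    budget : B, the number of explanations the end user will examine.
    Returns the indices of the selected, non-redundant instances.
    """
    W = np.abs(W)
    # Global importance of each feature: I_j = sqrt(sum_i |W_ij|)
    importance = np.sqrt(W.sum(axis=0))
    selected = []
    covered = np.zeros(W.shape[1], dtype=bool)  # features covered so far
    for _ in range(budget):
        best_gain, best_i = -1.0, None
        for i in range(W.shape[0]):
            if i in selected:
                continue
            # Coverage if instance i were added to the selection
            new_cover = covered | (W[i] > 0)
            gain = importance[new_cover].sum()
            if gain > best_gain:
                best_gain, best_i = gain, i
        selected.append(best_i)
        covered |= W[best_i] > 0
    return selected

# Toy example: instance 2 touches the two most important features,
# and instance 3 is the only one covering the remaining feature.
W = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)
print(submodular_pick(W, budget=2))  # → [2, 3]
```

Because coverage is submodular (adding an instance to a larger selection never helps more than adding it to a smaller one), this simple greedy loop is guaranteed to come within a constant factor of the optimal pick. The `lime` Python package exposes a ready-made implementation of this idea in its `submodular_pick` module.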