Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Emphasizing prescriptive insights for explainability

Prescriptive insight is a popular term in data analysis. It refers to providing actionable recommendations, derived from the dataset, that help achieve a desired outcome. It is often considered a catalyst in the entire process of data-driven decision-making. In the context of XAI, explanation methods such as counterfactual examples, data-centric XAI, and what-if analysis are prominently used to provide actionable suggestions to the user.
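
To make this concrete, the following is a minimal what-if analysis sketch: a simple classifier is trained on synthetic loan-application data, a single feature of one applicant is perturbed, and the instance is re-scored to see whether the predicted outcome changes. The dataset, feature names, and model choice are illustrative assumptions, not an example taken from the book's case studies.

# Illustrative what-if analysis: perturb a single feature of an
# applicant and re-score it to see whether the prediction flips.
# The toy data and feature names here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "credit_score": rng.normal(650, 80, n),
    "loan_amount": rng.normal(20_000, 8_000, n),
})
# Hypothetical approval rule with noise, used only to create labels
y = ((X["income"] / 1_000 + X["credit_score"] / 10
      - X["loan_amount"] / 2_000 + rng.normal(0, 5, n)) > 95).astype(int)

model = LogisticRegression(max_iter=1_000).fit(X, y)

applicant = X.iloc[[0]].copy()
print("Original prediction:", model.predict(applicant)[0])

# What-if question: what happens if this applicant's credit score
# were 50 points higher, with everything else unchanged?
what_if = applicant.copy()
what_if["credit_score"] += 50
print("What-if prediction:", model.predict(what_if)[0])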

Along with counterfactuals, the concept of actionable recourse in ML is also used for generating prescriptive insights. Actionable recourse is the ability of a user to alter the prediction of an ML model by modifying features that are actionable. But how does it differ from counterfactuals? Actionable recourse can be considered an extension of the idea of counterfactual examples, one that restricts the search to actionable features rather than all the features present in the dataset, as sketched below.
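
One possible way to generate such recourse is with an open source counterfactual library such as DiCE (dice-ml), restricting the search to actionable features through its features_to_vary argument so that immutable attributes stay fixed. The toy data, feature names, and model below are assumptions made for illustration only.

# A minimal sketch of actionable recourse with the DiCE (dice-ml)
# library: counterfactuals are generated for a rejected applicant,
# but only features the user can realistically change are allowed
# to vary, while immutable attributes such as age are held fixed.
# The toy data and feature names are hypothetical.
import dice_ml
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(21, 70, n).astype(float),
    "income": rng.normal(50_000, 15_000, n),
    "credit_score": rng.normal(650, 80, n),
})
df["approved"] = ((df["income"] / 1_000 + df["credit_score"] / 10
                   + rng.normal(0, 5, n)) > 115).astype(int)

model = RandomForestClassifier(random_state=0).fit(
    df.drop(columns="approved"), df["approved"])

data = dice_ml.Data(dataframe=df,
                    continuous_features=["age", "income", "credit_score"],
                    outcome_name="approved")
ml_model = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(data, ml_model, method="random")

# Pick one rejected applicant and ask for recourse: counterfactuals
# that flip the prediction while changing only actionable features.
rejected = df[df["approved"] == 0].drop(columns="approved").head(1)
recourse = explainer.generate_counterfactuals(
    rejected, total_CFs=3, desired_class="opposite",
    features_to_vary=["income", "credit_score"])  # "age" stays fixed
recourse.visualize_as_dataframe(show_only_changes=True)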

Now, what do...