Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy, while also promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques blends industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to apply XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices for the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Efforts toward increasing user acceptance of AI/ML systems using XAI

In this section, we will discuss some recommended practices for increasing the acceptance of AI/ML systems using XAI. In most software projects, the User Acceptance Testing (UAT) phase determines the go/no-go decision for releasing the software. Similarly, before the final production phase, more and more organizations prefer to run a robust UAT process for AI/ML systems. But how important is the explainability of AI algorithms when doing UAT of AI/ML systems? Can explainability increase the user acceptance of AI? The short answer is yes! Let's go through the following points to understand why:

  • User acceptance is a testimony of the user's trust – Since XAI can increase the user's trust in AI, it increases the chance of the user accepting the solution. Trust, however, cannot be established only during the UAT phase; rather, it should be established from the beginning and maintained...
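As one simple illustration of the idea above (this is a sketch of my own, not a recipe from the book), a team could attach a global explanation to every model build that goes through UAT, so that reviewers can check whether the model relies on features that make sense to domain experts. Here, scikit-learn's permutation importance stands in for any explanation method; the dataset and model are placeholders.

```python
# Hypothetical UAT aid: surface a global explanation alongside the model
# so reviewers can judge whether its reasoning looks sensible.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model standing in for the system under UAT.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature on the
# held-out set hurt the model's score? Larger drop = more relied upon.
result = permutation_importance(
    model, X_test, y_test, n_repeats=5, random_state=0
)

# Rank features by mean importance and report the top few to reviewers.
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

If the top-ranked features contradict domain knowledge, that is a concrete, discussable finding for the UAT sign-off meeting rather than an opaque "the accuracy looks fine".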