Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Checking adversarial robustness

In the previous section, we discussed the importance of anticipating and monitoring drift in any production-level ML system. Usually, this type of monitoring is done after the model has been deployed to production. However, even before deployment, it is critical to check the model's adversarial robustness.

Most ML models are prone to adversarial attacks: injections of carefully crafted noise into the input data that cause the model to fail by making incorrect predictions. A model's susceptibility to such attacks tends to grow with its complexity, as complex models are very sensitive to noisy data samples. Checking for adversarial robustness, then, means evaluating how sensitive the trained model is to adversarial attacks.
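To make the idea concrete, here is a minimal sketch of one classic attack, the fast gradient sign method (FGSM), applied to a toy logistic classifier. Everything in this snippet (the weights, the sample, the epsilon value) is hypothetical and chosen purely for illustration; it is not the book's own implementation, but it shows the core mechanism: a small perturbation aligned with the loss gradient can flip an otherwise confident prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic classifier: prediction = sigmoid(w @ x + b) >= 0.5
w = np.array([2.0, -1.0, 0.5])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) >= 0.5)

# FGSM-style perturbation: step the input in the direction of the sign of the
# loss gradient with respect to the input. For the logistic loss with true
# label y, the gradient with respect to x is (p - y) * w.
def fgsm(x, y, eps):
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# A sample the model classifies correctly (and confidently) as class 1
x = np.array([1.0, -0.5, 0.2])

# A modest perturbation is enough to flip the prediction to class 0
x_adv = fgsm(x, y=1, eps=1.0)
print(predict(x), predict(x_adv))  # prints: 1 0
```

A basic robustness check repeats this over a held-out set at several values of `eps` and reports how quickly accuracy degrades: a model whose accuracy collapses at tiny perturbations is not adversarially robust, even if its clean-data accuracy is high.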

In this section, first, we will try to understand the impact of adversarial attacks on the model and why this is important in the context of XAI. Then, we will discuss certain techniques that we...