Deep Learning and XAI Techniques for Anomaly Detection

By: Cher Simon

Overview of this book

Despite promising advances, the opaque nature of deep learning models makes them difficult to interpret, which is a drawback for practical deployment and regulatory compliance. Deep Learning and XAI Techniques for Anomaly Detection shows you state-of-the-art methods that’ll help you understand and address these challenges. By leveraging the Explainable AI (XAI) and deep learning techniques described in this book, you’ll discover how to successfully extract business-critical insights while ensuring fair and ethical analysis. This practical guide provides tools and best practices for achieving transparency and interpretability with deep learning models, ultimately establishing trust in your anomaly detection applications. Throughout the chapters, you’ll be equipped with the XAI and anomaly detection knowledge you need to embark on a series of real-world projects. Whether you are building computer vision, natural language processing, or time series models, you’ll learn how to quantify and assess their explainability. By the end of this deep learning book, you’ll be able to build a variety of deep learning XAI models and perform validation to assess their explainability.
Table of Contents (15 chapters)

Part 1 – Introduction to Explainable Deep Learning Anomaly Detection
Part 2 – Building an Explainable Deep Learning Anomaly Detector
Part 3 – Evaluating an Explainable Deep Learning Anomaly Detector

What this book covers

Chapter 1, Understanding Deep Learning Anomaly Detection, describes types of anomalies and real-world use cases for anomaly detection. It provides two PyOD example walk-throughs to illustrate fundamental concepts, and discusses the challenges, opportunities, and considerations of using deep learning for anomaly detection.
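For a flavor of the PyOD workflow this chapter builds on, here is a minimal sketch; the synthetic data, the KNN detector, and the contamination setting are illustrative assumptions, not the book's exact walk-through:

```python
# A minimal PyOD sketch: fit a detector on mostly normal data and flag
# the highest-scoring points as anomalies. The synthetic data and the
# KNN detector are illustrative choices, not the book's exact code.
import numpy as np
from pyod.models.knn import KNN

rng = np.random.default_rng(42)
X_normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # inliers
X_outliers = rng.uniform(low=5.0, high=8.0, size=(10, 2))  # injected anomalies
X = np.vstack([X_normal, X_outliers])

clf = KNN(contamination=0.02)  # expected fraction of anomalies in the data
clf.fit(X)

print(clf.labels_[-10:])           # binary labels: 1 = anomaly, 0 = inlier
print(clf.decision_scores_[-10:])  # raw scores: higher = more anomalous
```

Every PyOD detector exposes the same fit/labels_/decision_scores_ interface, which is what makes the library convenient for comparing detectors on the same data.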

Chapter 2, Understanding Explainable AI, covers an overview of XAI, including its evolution since the US Defense Advanced Research Projects Agency (DARPA) initiative, its significance in the context of the Right to Explanation and regulatory compliance, and a holistic approach to the XAI life cycle.

Chapter 3, Natural Language Processing Anomaly Explainability, dives deep into finding anomalies within textual data. You will complete two NLP example walk-throughs to detect anomalies using AutoGluon and Cleanlab and explain the model’s output using SHapley Additive exPlanations (SHAP).
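As a rough preview of the label-anomaly idea behind the Cleanlab walk-through, the following sketch uses a TF-IDF plus logistic regression pipeline as an illustrative stand-in for the chapter's AutoGluon models (the toy texts and labels are assumptions, and Cleanlab 2.x is assumed):

```python
# A hedged sketch of Cleanlab-style label-anomaly detection on text;
# the pipeline and toy data are placeholders, not the book's walk-through.
import numpy as np
from cleanlab.filter import find_label_issues
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

texts = ["great product", "terrible service", "loved it", "awful experience"] * 25
labels = np.array([1, 0, 1, 0] * 25)

X = TfidfVectorizer().fit_transform(texts)

# Out-of-sample predicted probabilities, as Cleanlab recommends
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=1000), X, labels,
    cv=5, method="predict_proba",
)

# Indices of examples whose given label looks anomalous, worst first
issues = find_label_issues(
    labels=labels, pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(issues[:10])
```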

Chapter 4, Time Series Anomaly Explainability, introduces concepts and approaches to detecting anomalies within time series data. You will build a time series anomaly detector using Long Short-Term Memory (LSTM) and explain anomalies using OmniXAI’s SHAP explainer.
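To preview the reconstruction-error approach, here is a compact PyTorch sketch; the autoencoder architecture, window size, toy sine-wave data, and three-sigma threshold are illustrative assumptions, not the book's exact model:

```python
# An LSTM autoencoder sketch: windows the model reconstructs poorly are
# flagged as anomalies. All hyperparameters here are illustrative.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        _, (h, _) = self.encoder(x)                      # summarize the window
        h = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat per time step
        dec, _ = self.decoder(h)
        return self.out(dec)                             # reconstructed window

# Sliding windows over a toy univariate series: (batch, window, features)
series = torch.sin(torch.linspace(0, 20, 400)).unsqueeze(-1)
windows = series.unfold(0, 20, 1).transpose(1, 2)

model = LSTMAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(50):          # brief illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(windows), windows)
    loss.backward()
    optimizer.step()

# Windows with unusually high reconstruction error are flagged as anomalies
with torch.no_grad():
    errors = ((model(windows) - windows) ** 2).mean(dim=(1, 2))
threshold = errors.mean() + 3 * errors.std()
print(torch.nonzero(errors > threshold).flatten())
```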

Chapter 5, Computer Vision Anomaly Explainability, integrates visual anomaly detection with XAI. This chapter covers various techniques for image-level and pixel-level anomaly detection. The example walk-through shows how to implement a visual anomaly detector and evaluate discriminative image regions identified by the model using Class Activation Mapping (CAM) and Gradient-Weighted Class Activation Mapping (Grad-CAM).
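The core of Grad-CAM can be sketched in a few lines of PyTorch; the untrained ResNet backbone, target layer, and random input below are placeholders for the chapter's actual visual anomaly detector:

```python
# A Grad-CAM sketch: capture the last conv block's feature maps and their
# gradients, weight channels by averaged gradients, and upsample the map.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # untrained placeholder backbone
activations, gradients = {}, {}

def save_grad(grad):
    gradients["value"] = grad           # gradient w.r.t. the feature maps

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()   # feature maps (B, C, H, W)
    output.register_hook(save_grad)

model.layer4[-1].register_forward_hook(fwd_hook)  # last conv block

x = torch.randn(1, 3, 224, 224)         # stand-in for a real image
scores = model(x)
scores[0, scores.argmax()].backward()   # backprop the top-class score

# Grad-CAM: weight each channel by its spatially averaged gradient,
# sum over channels, and keep only positive evidence
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
print(cam.shape)                        # torch.Size([1, 1, 224, 224]) heatmap
```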

Chapter 6, Differentiating Intrinsic versus Post Hoc Explainability, discusses intrinsic versus post hoc XAI methods at the local and global levels. The example walk-through further demonstrates the considerations when choosing either approach.
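A small scikit-learn sketch illustrates the distinction; the dataset and models are illustrative stand-ins, not the chapter's walk-through:

```python
# Intrinsic vs. post hoc explainability, side by side on one dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Intrinsic: a linear model is explainable by construction, because its
# learned coefficients directly express each feature's contribution.
linear = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print(linear.coef_[0][:5])

# Post hoc: a black-box model is explained after training by measuring
# how shuffling each feature degrades held-out performance.
black_box = GradientBoostingClassifier().fit(X_tr, y_tr)
result = permutation_importance(black_box, X_te, y_te, n_repeats=10,
                                random_state=0)
print(result.importances_mean[:5])
```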

Chapter 7, Backpropagation versus Perturbation Explainability, reviews gradient-based backpropagation and perturbation-based XAI methods to determine feature importance for a model’s decision. This chapter has two example walk-throughs covering saliency maps and Local Interpretable Model-Agnostic Explanations (LIME).
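As a taste of the backpropagation side, a vanilla saliency map needs only one gradient computation (LIME, the perturbation-based counterpart, instead fits a local surrogate on perturbed inputs); the untrained model and random input below are placeholders:

```python
# A saliency-map sketch: the gradient of the predicted score with respect
# to the input pixels indicates which pixels most affect the decision.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()                 # untrained placeholder
x = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in image

scores = model(x)
scores[0, scores.argmax()].backward()   # gradient of the top-class score

# Pixel importance = gradient magnitude, maximized over color channels
saliency = x.grad.abs().max(dim=1).values
print(saliency.shape)                   # torch.Size([1, 224, 224])
```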

Chapter 8, Model-Agnostic versus Model-Specific Explainability, evaluates how these two approaches work with example walk-throughs using Kernel SHAP and Guided Integrated Gradients (Guided IG). This chapter also outlines a working-backward methodology for choosing the model-agnostic versus the model-specific XAI method, starting with analyzing and understanding stakeholder and user needs.
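A hedged sketch of Kernel SHAP's model-agnostic interface follows; the random forest, dataset, and background sample are illustrative, and only the class-1 probability is explained to keep the output simple:

```python
# Kernel SHAP treats the model as a black box: it needs only a prediction
# function and background data for marginalizing features out.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def predict_class1(data):
    return model.predict_proba(data)[:, 1]   # probability of class 1

explainer = shap.KernelExplainer(predict_class1, X[:50])
shap_values = explainer.shap_values(X[:1])   # contributions for one sample
print(shap_values[0][:5])                    # first few feature attributions
```

Because Kernel SHAP never inspects the model's internals, the same code works for any predictor; a model-specific method such as Guided IG instead requires gradient access to the network.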

Chapter 9, Explainability Evaluation Schemes, describes fundamental XAI principles recommended by the National Institute of Standards and Technology (NIST). This chapter also surveys the existing XAI benchmarking landscape, covering how to quantify model explainability and assess the extent of interpretability.