Deep Learning and XAI Techniques for Anomaly Detection

By: Cher Simon
Overview of this book

Despite promising advances, the opaque nature of deep learning models makes it difficult to interpret them, which is a drawback in terms of their practical deployment and regulatory compliance. Deep Learning and XAI Techniques for Anomaly Detection shows you state-of-the-art methods that’ll help you to understand and address these challenges. By leveraging the Explainable AI (XAI) and deep learning techniques described in this book, you’ll discover how to successfully extract business-critical insights while ensuring fair and ethical analysis. This practical guide will provide you with tools and best practices to achieve transparency and interpretability with deep learning models, ultimately establishing trust in your anomaly detection applications. Throughout the chapters, you’ll get equipped with XAI and anomaly detection knowledge that’ll enable you to embark on a series of real-world projects. Whether you are building computer vision, natural language processing, or time series models, you’ll learn how to quantify and assess their explainability. By the end of this deep learning book, you’ll be able to build a variety of deep learning XAI models and perform validation to assess their explainability.
Table of Contents (15 chapters)

Part 1 – Introduction to Explainable Deep Learning Anomaly Detection
Part 2 – Building an Explainable Deep Learning Anomaly Detector
Part 3 – Evaluating an Explainable Deep Learning Anomaly Detector

Understanding the basics of XAI

AI systems extract patterns from input data and make inferences based on learned knowledge. The increased use of AI systems and applications in our everyday lives has prompted a growing demand for interpretability and accountability to justify model outputs, a prerequisite for broader AI adoption. Unlike traditional human-made, rule-based systems, which are self-explanatory, many deep learning algorithms are inherently opaque and too complex for human interpretation.

XAI is a multidisciplinary field that spans psychology, computer science, and engineering, as pictured in Figure 2.1. Applying concepts from algorithms, psychology, and cognitive science, XAI aims to provide explainable AI decisions, allowing users without ML backgrounds to comprehend model behavior.
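To make the idea of an explainable decision concrete, here is a minimal sketch (not from the book) of a toy anomaly detector whose output comes with a built-in local explanation. The detector scores a point by summing squared z-scores across features; the per-feature contributions tell a non-expert which feature drove the decision. The feature names and values are purely illustrative.

```python
# A toy "explainable" anomaly score: the total score is a sum of squared
# z-scores, and the per-feature terms serve as a local explanation.

def explain_anomaly(point, means, stds):
    """Return the total anomaly score and per-feature contributions."""
    contributions = {
        name: ((point[name] - means[name]) / stds[name]) ** 2
        for name in point
    }
    return sum(contributions.values()), contributions

# Hypothetical sensor readings: temperature is far from its historical mean.
means = {"temperature": 70.0, "pressure": 30.0}
stds = {"temperature": 2.0, "pressure": 1.0}

score, contrib = explain_anomaly(
    {"temperature": 80.0, "pressure": 30.5}, means, stds
)
print(score)                          # 25.25
print(max(contrib, key=contrib.get))  # temperature
```

Real XAI techniques covered later in the book (such as attribution methods for deep models) are far more sophisticated, but they follow the same pattern: pair every model output with evidence a human can inspect.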

Figure 2.1 – XAI as a multidisciplinary field

DARPA’s initial focus was to evaluate XAI in two problem areas: data analytics and autonomy. They...