Deep Learning and XAI Techniques for Anomaly Detection

By: Cher Simon
4.8 (13)

Overview of this book

Despite promising advances, the opaque nature of deep learning models makes it difficult to interpret them, which is a drawback in terms of their practical deployment and regulatory compliance. Deep Learning and XAI Techniques for Anomaly Detection shows you state-of-the-art methods that’ll help you to understand and address these challenges. By leveraging the Explainable AI (XAI) and deep learning techniques described in this book, you’ll discover how to successfully extract business-critical insights while ensuring fair and ethical analysis. This practical guide will provide you with tools and best practices to achieve transparency and interpretability with deep learning models, ultimately establishing trust in your anomaly detection applications. Throughout the chapters, you’ll get equipped with XAI and anomaly detection knowledge that’ll enable you to embark on a series of real-world projects. Whether you are building computer vision, natural language processing, or time series models, you’ll learn how to quantify and assess their explainability. By the end of this deep learning book, you’ll be able to build a variety of deep learning XAI models and perform validation to assess their explainability.
Table of Contents (15 chapters)

Part 1 – Introduction to Explainable Deep Learning Anomaly Detection
Part 2 – Building an Explainable Deep Learning Anomaly Detector
Part 3 – Evaluating an Explainable Deep Learning Anomaly Detector

Backpropagation versus Perturbation Explainability

Researchers have found that the human brain can process and interpret an image the eye sees in as little as 13 milliseconds. The human visual process begins when light reaches the retina, which converts it into neural signals that carry information such as shape, hue, and orientation to the brain. The brain then draws on past experience to extract the information that shapes visual perception.

Figure 7.1 shows how a computer sees an image: a grid of pixel values ranging between 0 and 255, with no inherent knowledge of shapes or colors. Deep neural networks, such as convolutional neural networks (CNNs), loosely resemble the human visual system, stacking layers of artificial neurons with nonlinear activations to learn features from the input image.

Figure 7.1 – Human visual perception versus computer vision
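The two ideas above can be made concrete with a short sketch: an image as a grid of integer pixel values in [0, 255], and one convolutional layer with a nonlinearity learning (here, hand-picking) a local feature. This is an illustrative NumPy sketch, not code from the book; the 3x3 edge-detecting kernel and the hand-rolled convolution are assumptions chosen to show the mechanics.

```python
import numpy as np

# A computer "sees" an image as a grid of pixel values in [0, 255].
# Fabricate a tiny 6x6 grayscale image (illustrative values only).
rng = np.random.default_rng(seed=0)
image = rng.integers(0, 256, size=(6, 6), dtype=np.uint8)

# A CNN layer slides small filters over this grid and applies a
# nonlinearity. A classic 3x3 edge-detecting kernel stands in for a
# learned filter here.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

def conv2d_relu(img, k):
    """Valid convolution (no padding) followed by ReLU."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + kh, j:j + kw].astype(float)
            out[i, j] = np.sum(patch * k)
    return np.maximum(out, 0)  # ReLU zeroes out negative responses

feature_map = conv2d_relu(image, kernel)
print(feature_map.shape)  # (4, 4): valid convolution shrinks the grid
```

A real CNN would learn many such kernels from data rather than hard-code one, and would stack several convolution-plus-nonlinearity layers so that later layers respond to increasingly abstract features.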

Despite promising results in recent years, these state-of-the-art models have yet to gain broad adoption due to...
