Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy while promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques combines industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing the key pain points commonly encountered.
Table of Contents (16 chapters)
Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Explaining deep learning models using DeepExplainer and GradientExplainer

In the previous section, we covered the use of TreeExplainer in SHAP, which is a model-specific explainability method only applicable to tree ensemble models. We will now discuss GradientExplainer and DeepExplainer, two other model-specific explainers in SHAP that are mostly used with deep learning models.
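Both explainers follow the same usage pattern: they are constructed from a model and a background dataset that defines the expected model output, and are then queried for SHAP values on the instances to be explained. The following is a minimal setup sketch, assuming a tiny Keras image classifier trained on MNIST purely for illustration; the model architecture, dataset, and sample sizes are placeholders rather than the book's working example.

import numpy as np
import tensorflow as tf

# Illustrative stand-in for a trained deep learning model: a tiny Keras CNN
# on MNIST. Replace with whatever model you actually want to explain.
(x_train, y_train), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., np.newaxis].astype("float32") / 255.0
x_test = x_test[..., np.newaxis].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train[:2000], y_train[:2000], epochs=1, verbose=0)

# A small background sample is enough; DeepExplainer and GradientExplainer
# both use it to estimate the expected model output.
background = x_train[np.random.choice(len(x_train), 100, replace=False)]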

GradientExplainer

As discussed in Chapter 2, Model Explainability Methods, one of the most widely adopted ways to explain deep learning models trained on unstructured data such as images is layer-wise relevance propagation (LRP). LRP analyzes the gradient flow through the intermediate layers of a deep neural network. SHAP's GradientExplainer functions in a similar way. As discussed in Chapter 6, Model Interpretability Using SHAP, GradientExplainer combines the ideas of SHAP, integrated gradients, and SmoothGrad into a single expected value equation. GradientExplainer finally uses a sensitivity...
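Continuing the hedged sketch from above, GradientExplainer can be applied to the placeholder model as follows. The nsamples argument controls how many samples are drawn when estimating the expected gradients; the specific values used here are illustrative only.

import shap

# GradientExplainer approximates SHAP values as expected gradients,
# blending the ideas of integrated gradients and SmoothGrad-style sampling.
explainer = shap.GradientExplainer(model, background)

# nsamples controls how many background/interpolation samples are drawn
# per explanation; larger values give smoother attributions at higher cost.
shap_values = explainer.shap_values(x_test[:4], nsamples=200)

# Per-pixel attributions for the first few test images, one panel per class.
shap.image_plot(shap_values, x_test[:4])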
