Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

Applying TreeExplainers to tree ensemble models

As discussed in the previous chapter, the Tree SHAP implementation can work with tree ensemble models such as Random Forest, XGBoost, and LightGBM. Decision trees are inherently interpretable, but tree-based ensemble models, whether they use boosting or bagging, are not, and they can be quite difficult to interpret. SHAP is therefore one of the popular algorithm choices for explaining such complex models. The Kernel SHAP implementation is model-agnostic and can explain any model, but it can be very slow on larger datasets with many features. Tree SHAP (https://arxiv.org/abs/1802.03888), in contrast, is a fast, exact algorithm designed specifically for tree ensemble models. TreeExplainer is the fast C++ implementation of the Tree SHAP algorithm, and it supports XGBoost, CatBoost, LightGBM, and other tree ensemble models from scikit-learn.
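
The following is a minimal sketch of how TreeExplainer can be applied to a tree ensemble model; the dataset, model, and hyperparameters here are illustrative assumptions rather than the book's worked example:

# A minimal sketch (illustrative only): explaining a scikit-learn
# RandomForestRegressor with SHAP's TreeExplainer on a toy dataset.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Load a sample regression dataset and train a tree ensemble model
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# TreeExplainer runs the fast Tree SHAP algorithm on the trained ensemble
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global summary of feature contributions across the test set
shap.summary_plot(shap_values, X_test)

The summary plot gives a global view of the model, showing how strongly each feature pushes predictions up or down across the test set; the same explainer can also be used for local, instance-level explanations.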