Applying TreeExplainers to tree ensemble models
As discussed in the previous chapter, the Tree SHAP implementation works with tree ensemble models such as Random Forest, XGBoost, and LightGBM. Individual decision trees are inherently interpretable, but tree-based ensembles, whether they use boosting or bagging, are not, and can be quite complex to interpret. SHAP is therefore one of the popular algorithm choices for explaining such complex models. The Kernel SHAP implementation is model-agnostic and can explain any model; however, it can be very slow on larger datasets with many features. Tree SHAP (https://arxiv.org/abs/1802.03888), by contrast, is a high-speed exact algorithm designed specifically for tree ensemble models. TreeExplainer is the fast C++ implementation of the Tree SHAP algorithm, and it supports algorithms such as XGBoost, CatBoost, LightGBM, and other tree ensemble models from scikit...