Explaining deep learning models using DeepExplainer and GradientExplainer
In the previous section, we covered TreeExplainer in SHAP, a model-specific explainability method applicable only to tree-ensemble models. We will now discuss GradientExplainer and DeepExplainer, two other model-specific explainers in SHAP that are primarily used with deep learning models.
GradientExplainer
As discussed in Chapter 2, Model Explainability Methods, one of the most widely adopted ways to explain deep learning models trained on unstructured data such as images is layer-wise relevance propagation (LRP). LRP analyzes how relevance flows through the intermediate layers of a deep neural network. SHAP's GradientExplainer functions in a similar way. As discussed in Chapter 6, Model Interpretability Using SHAP, GradientExplainer combines the ideas of SHAP values, integrated gradients, and SmoothGrad into a single expected-value equation. GradientExplainer finally uses a sensitivity...