Model-agnostic explainability using KernelExplainer
In the previous sections, we discussed three model-specific explainers available in SHAP: TreeExplainer, GradientExplainer, and DeepExplainer. It is the KernelExplainer that makes SHAP a truly model-agnostic explainability approach. However, unlike the previous methods, KernelExplainer, which is based on the Kernel SHAP algorithm, is much slower, especially for large, high-dimensional datasets. Kernel SHAP combines ideas from Shapley values and Local Interpretable Model-agnostic Explanations (LIME) to provide both global and local interpretability for black-box models. Similar to the approach followed in LIME, Kernel SHAP creates perturbed samples around the instance being explained and fits a locally weighted linear model to them; the resulting coefficients estimate the Shapley values and identify the features contributing for or against the model prediction.
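As a minimal sketch of how this looks in practice (assuming an illustrative scikit-learn classifier and a small background sample drawn from the training data), KernelExplainer is constructed from the model's prediction function and a background dataset, and its shap_values method returns the estimated per-feature contributions:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative setup: any black-box model exposing a prediction function
# can be explained, since KernelExplainer never looks inside the model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Kernel SHAP is slow, so a small background sample (e.g. via shap.sample
# or shap.kmeans) is usually passed instead of the full training set.
background = shap.sample(X_train, 50)

# Build the model-agnostic explainer from the prediction function alone.
explainer = shap.KernelExplainer(model.predict_proba, background)

# Estimate Shapley values for a few test instances; nsamples controls how
# many perturbed feature coalitions are evaluated per instance.
shap_values = explainer.shap_values(X_test.iloc[:5], nsamples=100)
```

Each row of the result can then be inspected for a local explanation, or aggregated (for example with shap.summary_plot) to get a global view of feature importance.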
KernelExplainer is the practical implementation of the Kernel SHAP algorithm. The complete tutorial demonstrating the application of SHAP...