What makes LIME a good model explainer?
LIME enables non-expert users to understand how an otherwise untrustworthy black-box model arrives at its predictions, and to judge whether that model can be trusted. The following properties make LIME a good model explainer:
- Human interpretable: As discussed in the previous section, LIME produces explanations that are easy to understand, since it gives a qualitative link between the components of the input data and the model's outcome.
- Model-agnostic: Although the previous chapters covered various model-specific explanation methods, it is always an advantage if an explanation method can explain any black-box model. LIME makes no assumptions about the underlying model while generating its explanations, so it can work with any model.
- Local fidelity: Rather than approximating the model's behavior globally, LIME replicates it in the proximity of the data instance being predicted. The explanations it provides are therefore locally faithful to the instance used for prediction.
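The model-agnostic and local-fidelity properties above can be illustrated with a minimal, self-contained sketch of the LIME idea (not the actual `lime` package): treat the model as an opaque prediction function, perturb the input around the instance of interest, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients act as local feature attributions. The `black_box` function, kernel width, and sample counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: nonlinear in its two input features.
# LIME only needs to call it, so any model could be substituted here.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

# Instance whose prediction we want to explain.
x0 = np.array([1.0, 0.5])

# 1. Perturb: sample points in the neighbourhood of x0.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2. Weight each sample by its proximity to x0 (exponential kernel).
d2 = ((Z - x0) ** 2).sum(axis=1)
w = np.exp(-d2 / 0.3 ** 2)

# 3. Fit a weighted linear surrogate by scaling rows with sqrt(w)
#    and solving ordinary least squares on the scaled system.
A = np.column_stack([np.ones(len(Z)), Z - x0])  # intercept + centred features
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

# coef[1:] are the local feature attributions; for this toy model they
# should sit near the true local gradient (cos(1.0), 2 * 0.5).
print(coef[1:])
```

Because the surrogate is fitted only on samples near `x0`, its coefficients track the model's local behavior (local fidelity), and nothing in the procedure inspects the model's internals (model-agnosticism).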