Intuitive understanding of LIME
LIME is a model-agnostic, local explanation technique for interpreting black-box models by learning an interpretable surrogate model around an individual prediction. Because its explanations are expressed over human-interpretable representations, LIME is helpful for non-expert users, too. The technique was first proposed in the research paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier by Ribeiro et al. (https://arxiv.org/abs/1602.04938), and the Python library can be installed from the GitHub repository at https://github.com/marcotcr/lime. The algorithm does a good job of explaining any classifier or regressor by approximating it locally with an interpretable model that is faithful to the black-box model in the neighborhood of the prediction being explained. By explaining a set of representative individual predictions, it can also provide a global perspective on the model, which helps establish trust in any black-box model. So, it mainly functions by learning interpretable...