Potential pitfalls
In the previous section, we saw how easily the LIME Python framework can be used to explain black-box models for a classification problem. Unfortunately, the algorithm has certain limitations, and there are a few scenarios in which it is not effective:
- Although LIME provides interpretable explanations, any particular choice of interpretable data representation and interpretable surrogate model imposes its own limitations. Even though the underlying trained model is treated as a black box and no assumptions about it are made during the explanation process, certain representations are simply not expressive enough to capture some complex behaviors of the model. For example, if we are trying to build an image classifier to distinguish black-and-white (grayscale) images from colored images, then the presence or absence of superpixels carries no information about color, so toggling superpixels cannot produce a useful explanation (see the sketch that follows).
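  To make this failure mode concrete, here is a minimal sketch using the lime package's `LimeImageExplainer`. The `classify_color` function is a hypothetical stand-in for a color-versus-grayscale classifier, not a real model: because color is a global property of the image, every superpixel of a colored image is itself colored, so hiding individual superpixels barely changes the prediction and the learned explanation weights carry little signal.

  ```python
  import numpy as np
  from lime import lime_image

  def classify_color(images):
      """Hypothetical classifier: P(colored) from per-pixel channel spread.

      images: array of shape (N, H, W, 3), as passed in by LIME.
      """
      spread = images.max(axis=-1) - images.min(axis=-1)  # 0 where R == G == B
      p_colored = (spread.mean(axis=(1, 2)) > 1e-3).astype(float)
      return np.column_stack([1.0 - p_colored, p_colored])

  rng = np.random.default_rng(0)
  image = rng.random((64, 64, 3))  # a synthetic colored image

  explainer = lime_image.LimeImageExplainer()
  # LIME perturbs the input by turning superpixels on and off; since no
  # single superpixel drives the "colored" prediction, the fitted local
  # weights end up near zero and the explanation is uninformative.
  explanation = explainer.explain_instance(
      image, classify_color, top_labels=1, hide_color=0, num_samples=200
  )
  print(explanation.local_exp[explanation.top_labels[0]][:5])
  ```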
- As discussed earlier, LIME learns...