Chapter 4: LIME for Model Interpretability
In the previous chapters, we covered the core technical concepts of Explainable AI (XAI) needed to build trustworthy AI systems, along with practical examples and demonstrations using various Python frameworks; the accompanying code is available in this chapter's GitHub repository. XAI has been an active research topic for quite some time, but only recently have organizations begun adopting XAI as part of the solution life cycle when solving business problems with AI. One popular approach is Local Interpretable Model-Agnostic Explanations (LIME), which is widely used to provide model-agnostic local explainability. The LIME Python library is a robust framework that produces human-friendly explanations for tabular, text, and image data, helping to interpret black-box supervised machine learning models.
In this...