InterpretML
InterpretML (https://interpret.ml/) is an open-source explainable AI (XAI) toolkit from Microsoft. It aims to provide a comprehensive understanding of ML models for model debugging, outcome explainability, and regulatory audits. With this Python package, we can either train interpretable glassbox models or explain black-box models.
In Chapter 1, Foundational Concepts of Explainability Techniques, we discovered that some models, such as decision trees, linear models, and rule-fit algorithms, are inherently explainable. However, these models often sacrifice predictive accuracy on complex datasets. Because they are extremely transparent, they are usually termed glassbox models, as opposed to black-box models.
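To make the transparency point concrete, here is a small sketch (using scikit-learn, with the Iris toy dataset as an illustrative assumption) that prints the complete decision logic of a shallow decision tree as human-readable rules, something a black-box model cannot offer:

```python
# Sketch: a shallow decision tree is fully inspectable as if-then rules.
# The Iris toy dataset is an illustrative assumption.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the entire fitted model as nested if-then rules
rules = export_text(tree, feature_names=list(data.feature_names))
print(rules)
```

Every prediction the model makes can be traced by hand through the printed rules, which is precisely what makes such models glassbox.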
Microsoft Research developed another algorithm called Explainable Boosting Machine (EBM), which introduces modern ML techniques such as boosting, bagging, and automatic interaction detection into classical algorithms such as Generalized Additive Models (GAMs). Researchers have...