An intuitive understanding of SHAP and Shapley values
As discussed in Chapter 1, Foundational Concepts of Explainability Techniques, explaining black-box models is a necessity for increasing AI adoption. Algorithms that are model-agnostic and can provide local explainability with a global perspective are the ideal choice of explainability technique in machine learning (ML). That is why LIME is a popular choice in XAI. SHAP is another popular explainability technique in ML and, in certain scenarios, is more effective than LIME. In this section, we will develop an intuitive understanding of the SHAP framework and how it provides model explainability.
Introduction to SHAP and Shapley values
The SHAP framework was introduced by Scott Lundberg and Su-In Lee in their 2017 research paper, A Unified Approach to Interpreting Model Predictions (https://arxiv.org/abs/1705.07874). SHAP is based on the concept of Shapley values...
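Shapley values come from cooperative game theory: each player's value is their average marginal contribution to the payoff, taken over all possible orderings in which players could join the coalition. As a minimal sketch (pure Python, with a hypothetical three-player game and payoff table chosen purely for illustration), the exact computation looks like this:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the coalition."""
    contrib = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution of p in this ordering
            contrib[p] += value(frozenset(coalition)) - before
    n_orderings = factorial(len(players))
    return {p: c / n_orderings for p, c in contrib.items()}

# Hypothetical game: payoff earned by each possible coalition of A, B, C
payoffs = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60,
    frozenset("ABC"): 90,
}
phi = shapley_values(["A", "B", "C"], lambda s: payoffs[s])
# The values sum to the grand-coalition payoff (the efficiency property)
```

SHAP transfers this idea to ML by treating features as the players and the model's prediction as the payoff, so each feature's Shapley value is its fair share of the deviation of a prediction from the average.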