Summary
After reading this chapter, you have gained some practical exposure to using SHAP with structured tabular data as well as unstructured data such as images and text. We discussed the different explainers available in SHAP for both model-specific and model-agnostic explainability, and we applied SHAP to explain linear models, tree ensemble models, convolutional neural network models, and even transformer models. Using SHAP, we can explain many types of models trained on many types of data. I highly recommend working through the end-to-end tutorials provided in the GitHub code repository and exploring them in more depth to build deeper practical knowledge.
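As a quick reminder of the typical workflow covered in this chapter, the following is a minimal sketch (not the exact code from the tutorials) that trains a tree ensemble model on a standard scikit-learn dataset, explains it with a model-specific TreeExplainer, and produces the global and local SHAP plots discussed earlier:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and train a tree ensemble model
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Model-specific explainer: TreeExplainer exploits the tree structure
explainer = shap.TreeExplainer(model)
shap_values = explainer(X_test)

# Global feature-importance view and a local explanation for one prediction
shap.plots.beeswarm(shap_values)
shap.plots.waterfall(shap_values[0])
```

For a model without a specialized explainer, the model-agnostic `shap.Explainer` or `shap.KernelExplainer` can be used in the same way, at a higher computational cost.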
In the next chapter, we will discuss another interesting topic, concept activation vectors, and explore the practical application of the Testing with Concept Activation Vectors (TCAV) framework from Google AI for explaining models with human-friendly concepts.