Chapter 1: Foundational Concepts of Explainability Techniques
As more and more organizations adopt Artificial Intelligence (AI) and Machine Learning (ML) for critical business decision-making, there is a growing expectation to interpret and demystify black-box algorithms in order to increase trust in, and adoption of, these systems. AI and ML increasingly shape our day-to-day experiences across multiple areas, such as banking, healthcare, education, recruitment, transport, and supply chain. However, the integral role played by AI and ML models has raised concerns among business stakeholders and consumers about their lack of transparency and interpretability, as these black-box algorithms are highly susceptible to human bias. In high-stakes domains, such as healthcare, finance, law, and other critical industrial operations, model explainability is therefore a prerequisite.
Since the benefits of AI and ML can be significant, the question is: how can we increase their adoption despite these growing concerns? Can we address the concerns and democratize the use of AI and ML? And how can we make AI more explainable for critical industrial applications in which black-box models are not trusted? Throughout this book, we will explore answers to these questions and apply these concepts and ideas to solve practical problems!
In this chapter, you will learn about the foundational concepts of Explainable AI (XAI). This will clarify the terms and concepts used throughout the book and give you the theoretical grounding needed to understand and implement the advanced explainability techniques discussed in later chapters. The chapter focuses on the following main topics:
- Introduction to XAI
- Defining explanation methods and approaches
- Evaluating the quality of explainability methods
Now, let's get started!