Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

To get the most out of this book

To run the code tutorials provided in this book, you will need a Jupyter environment with Python 3.6+. This can be achieved in either of the following ways:

  • Install a Jupyter environment locally on your machine via Anaconda Navigator or from scratch with pip (a quick environment check is sketched after this list).
  • Use a cloud-based environment such as Google Colaboratory, Kaggle notebooks, Azure notebooks, or Amazon SageMaker.
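
Once your environment is set up, a quick sanity check such as the following can confirm that your Python version and Jupyter installation meet the book's requirements. This is a minimal illustrative sketch, not part of the book's code repository; the per-chapter package requirements are listed in the repository's supplementary information.

    # Minimal environment check (illustrative sketch, not from the book's repository):
    # confirms Python 3.6+ and a working Jupyter Notebook installation.
    import sys

    assert sys.version_info >= (3, 6), "The tutorials require Python 3.6 or later"

    try:
        import notebook  # classic Jupyter Notebook server package
        print("Jupyter Notebook version:", notebook.__version__)
    except ImportError:
        print("Jupyter is not installed; try: pip install notebook")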

If you are new to Jupyter notebooks, take a look at the supplementary information provided in the code repository: https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques/blob/main/SupplementaryInfo/CodeSetup.md.

You can also refer to https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques/blob/main/SupplementaryInfo/PythonPackageInfo.md and https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques/blob/main/SupplementaryInfo/DatasetInfo.md for supplementary information about the Python packages and datasets used in the tutorial notebooks.

For instructions on installing the Python packages used throughout the book, please refer to the specific notebook provided in the code repository. For any additional help, please refer to the original project repository of the specific package. You can search PyPI (https://pypi.org/) for the specific package and navigate to the project's code repository. Installation or execution instructions for these packages may change from time to time, given how frequently packages are updated. The code was also tested with the specific versions detailed in the Python package information README file under the supplementary information provided in the code repository. So, if anything doesn't work as expected with later versions, install the specific versions mentioned in the README instead.
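
As an illustrative sketch of checking your installation against the pinned versions, the snippet below compares installed package versions with expected ones. The package names and version numbers here are placeholders, not the actual pinned list; use the versions from the Python package information README in the code repository.

    # Illustrative sketch: compare installed package versions against pinned ones.
    # The names and versions below are placeholders; replace them with the list
    # from the Python package information README in the code repository.
    import importlib.metadata as metadata  # Python 3.8+; on 3.6/3.7 use the importlib-metadata backport

    pinned = {
        "shap": "0.40.0",   # placeholder version
        "lime": "0.2.0.1",  # placeholder version
    }

    for package, expected in pinned.items():
        try:
            installed = metadata.version(package)
            note = "OK" if installed == expected else f"README pins {expected}"
            print(f"{package}: {installed} ({note})")
        except metadata.PackageNotFoundError:
            print(f"{package}: not installed (pip install {package}=={expected})")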

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book's GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

For beginners without any exposure to ML or data science, it is recommended to read the book sequentially, as many important concepts are explained in sufficient detail in the earlier chapters. Seasoned ML or data science experts who are relatively new to the field of XAI can skim through the first three chapters to get a clear conceptual understanding of the terminology used, and can then read Chapters 4 to 9 in any order. For practitioners at all levels, it is recommended that you read Chapters 10 and 11 only after covering the first nine chapters.

Regarding the code provided, it is recommended that you either read each chapter first and then run the corresponding code, or run the code as you read the specific chapter. Sufficient theory is also included in the Jupyter notebooks to help you understand the overall flow of each notebook.

While reading the book, it is recommended that you take notes on the important terminology covered and think of ways in which you could apply the concepts and frameworks you learn. After reading the book and working through all the Jupyter notebooks, hopefully you will be inspired to put your newly gained knowledge into action!