Applied Machine Learning Explainability Techniques

By: Aditya Bhattacharya

Overview of this book

Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy along with promoting AI adoption for industrial and research use cases. Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You'll begin by gaining a conceptual understanding of XAI and why it's so important in AI. Next, you'll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you'll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users. By the end of this ML book, you'll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
Table of Contents (16 chapters)

Section 1 – Conceptual Exposure
Section 2 – Practical Problem Solving
Section 3 – Taking XAI to the Next Level

LIME for production-level systems

The short answer to the question posed at the end of the last section is yes. LIME can definitely be scaled for use in production-level systems, for the following main reasons:

  • Minimal implementation complexity: The API structure of the LIME Python framework is concise and well structured, allowing us to add model explainability in just a few lines of code. The runtime complexity of generating local explanations for individual inference instances is very low, so this approach can also work for real-time applications.
  • Easy integration with other software applications: The API structure of the framework is modular. To consume the explainability results, we do not need to rely solely on the built-in visualizations provided by the framework. We can utilize the raw explainability results to create our own custom visualization dashboards or reports. We can also create custom web API methods and host...