Responsible AI in the Enterprise

By Adnan Masood, Heather Dawe
Overview of this book

Responsible AI in the Enterprise is a comprehensive guide to implementing ethical, transparent, and compliant AI systems in an organization. With a focus on understanding key concepts of machine learning models, this book equips you with techniques and algorithms to tackle complex issues such as bias, fairness, and model governance. Throughout the book, you’ll work with FairLearn and InterpretML, along with the Google What-If Tool, ML Fairness Gym, IBM AI Fairness 360, and Aequitas. You’ll uncover various aspects of responsible AI, including model interpretability, monitoring and management of model drift, and compliance recommendations, and you’ll gain practical insights into using AI governance tools to ensure fairness, bias mitigation, explainability, and privacy compliance in an enterprise setting. Additionally, you’ll explore the interpretability toolkits and fairness measures offered by major cloud AI providers such as IBM, Amazon, Google, and Microsoft, discover how to use FairLearn for fairness assessment and bias mitigation, and learn to build explainable models using global and local feature summaries, local surrogate models, Shapley values, anchors, and counterfactual explanations. By the end of this book, you’ll be well equipped with tools and techniques to create transparent and accountable machine learning models.
Table of Contents (16 chapters)
Part 1: Bigot in the Machine – A Primer
Part 2: Enterprise Risk Observability and Model Governance
Part 3: Explainable AI in Action


As practicing data scientists, we have seen first-hand how AI models play a significant role in many aspects of our lives. However, as the cliché goes, with this power comes the responsibility to ensure that these decision-making systems are fair, transparent, and trustworthy. That’s why we decided to write this book.

We have observed that many companies face challenges when it comes to the governance and auditing of machine learning systems. One major issue is bias, which can lead to unfair outcomes. Another issue is the lack of interpretability, making it difficult to know whether the models are functioning correctly. Finally, there’s the challenge of explaining AI decisions to humans, which can lead to a lack of trust in these systems.

Controlling frameworks and standards for AI (in the form of government regulation, ISO standards, and similar) that ensure it is fair, ethical, and fit for the purpose of its application are still nascent, having only started to become available within the past few years. This is surprising given AI’s growing ubiquity in our lives. As these frameworks are published and adopted, AI assurance will mature and, we hope, become as ubiquitous as AI itself. Until then, we hope this book fills the gap that data professionals within the enterprise face as they seek to ensure the AI they develop and use is fair, ethical, and fit for purpose.

With these challenges and intentions in mind, we aimed to write a book that fits the following criteria:

  • Does not repeat information that is already widely available
  • Is accessible to business and subject-matter experts who are interested in learning about explainable and interpretable AI
  • Provides practical guidance, including checklists and resources, to help companies get started with explainable AI

We’ve kept the technical language to a minimum and made the book easy to understand so that it can be used as a resource for professionals at all levels of experience.

As AI continues to evolve, it’s important for companies to have a clear understanding of how these systems work and to be able to explain their algorithmic value propositions. This is not just a matter of complying with regulations but also about building trust with customers and stakeholders.

This book is for business stakeholders, technical leaders, regulators, and anyone interested in the responsible use of AI. We cover a range of topics, including explainable AI, algorithmic bias, trust in AI systems, and the use of various tools for fairness assessment and bias mitigation. We also discuss the role of model monitoring and governance in ensuring the reliability and transparency of AI systems.

Given the increasing importance of responsible AI practices, this book is particularly relevant in light of current AI standards and guidelines, such as the EU’s GDPR, the AI Now Institute’s Algorithmic Impact Assessment, and the Partnership on AI’s Principles for Responsible AI. Our hope is that by exploring these critical issues and sharing best practices, we can help you understand the importance of responsible AI and inspire you to take action to ensure that AI is used for the betterment of all.

  1. Exploring the Landscape of Explainable AI and Bias: Chapters 1 and 2 introduce Explainable AI (XAI), a crucial component in the development and deployment of AI models, and provide a comprehensive overview of the XAI landscape, its importance, and the challenges it poses. The section opens with a primer on XAI and ethical AI for model risk management, establishing the definitions and concepts you will need for the rest of the book. You will then be presented with several harrowing tales of AI gone bad, highlighting the dangers of unexplainable and biased AI and illustrating why different approaches are needed to address such problems. Chapter 2, Algorithms Gone Wild, takes a closer look at bias, exploring the different types of bias that can be introduced into models and the impact they have on the outcomes produced. By the end of this section, you will have a deeper understanding of XAI and its challenges, as well as a greater appreciation of ethical AI and the need to address bias in AI models.
  2. Exploring Explainability, Risk Observability, and Model Governance: Chapters 3 to 6 delve into explainability, risk observability, and model governance, particularly in the context of cloud computing platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud. These chapters cover several important areas, including model interpretability approaches, measuring and monitoring model drift, audit and compliance standards, an enterprise starter kit for fairness, accountability, and transparency, as well as bias removal, model robustness, and adversarial attacks, giving you a comprehensive understanding of these concepts.
  3. Applied Explainable AI: Real-World Scenarios and Case Studies: Chapters 7 to 10, the final section, delve into the practical application of explainable AI and the challenges of deploying trustworthy and interpretable models in the enterprise. Real-world case studies and usage scenarios illustrate the need for safe, ethical, and explainable machine learning and provide solutions to problems encountered in various domains. The chapters explore code examples, toolkits, and solutions offered by cloud platforms such as AWS, GCP, and Azure, Microsoft’s FairLearn framework, and Azure OpenAI Large Language Models (LLMs) such as GPT-3, GPT-4, and ChatGPT. Specific topics include interpretability toolkits, fairness measures, fairness in AI systems, and bias mitigation strategies. We also review a real-world implementation of GPT-3, along with recommendations and guidelines for using LLMs in a safe and responsible manner.
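As a small taste of the model drift monitoring covered in Part 2, the sketch below computes a Population Stability Index (PSI) between a model's training-time score distribution and its serving-time distribution. This is a minimal, library-free illustration under assumed data; the distributions and the 0.25 alert threshold are hypothetical conventions, and the book itself covers the managed drift-monitoring tooling offered by Azure, AWS, and GCP.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    By common convention, PSI < 0.1 reads as 'no significant drift'
    and PSI > 0.25 as 'significant drift'."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0] = float("-inf")   # catch serving values below the training min
    edges[-1] = float("inf")   # ...and above the training max

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        # small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 100 for i in range(100)]               # uniform scores at training time
serve_same = [i / 100 for i in range(100)]          # identical serving distribution
serve_shift = [0.5 + i / 200 for i in range(100)]   # scores drifted upward

print(psi(train, serve_same))    # 0.0: identical distributions, no drift
print(psi(train, serve_shift))   # well above 0.25: drift flagged
```

In a production governance setup, a statistic like this would be computed on a schedule against live scoring traffic and wired to an alert, which is the pattern the cloud drift monitors in these chapters automate.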
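Likewise, the kind of fairness assessment FairLearn performs can be previewed with a hand-rolled demographic parity check. The following is a simplified, pure-Python sketch with made-up loan-approval predictions and group labels; FairLearn's `MetricFrame` and `demographic_parity_difference` provide the production-grade equivalents used in these chapters.

```python
def selection_rates(y_pred, sensitive):
    """Fraction of positive (1) predictions per sensitive-attribute group."""
    groups = {}
    for pred, grp in zip(y_pred, sensitive):
        pos, total = groups.get(grp, (0, 0))
        groups[grp] = (pos + pred, total + 1)
    return {g: pos / total for g, (pos, total) in groups.items()}

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in selection rate between any two groups (0 = parity)."""
    rates = selection_rates(y_pred, sensitive).values()
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions (1 = approved) for two groups
y_pred    = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1]
sensitive = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(selection_rates(y_pred, sensitive))   # {'A': 0.8, 'B': 0.4}
print(round(demographic_parity_difference(y_pred, sensitive), 2))  # 0.4
```

A gap of 0.4 means group A is approved twice as often as group B; the bias mitigation strategies in this section are about reducing such gaps without destroying model utility.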