Responsible AI in the Enterprise

By: Adnan Masood, Heather Dawe
Overview of this book

Responsible AI in the Enterprise is a comprehensive guide to implementing ethical, transparent, and compliant AI systems in an organization. With a focus on the key concepts behind machine learning models, this book equips you with techniques and algorithms to tackle complex issues such as bias, fairness, and model governance. Throughout the book, you’ll work with FairLearn and InterpretML, along with the Google What-If Tool, ML Fairness Gym, the IBM AI Fairness 360 toolkit, and Aequitas. You’ll explore various aspects of responsible AI, including model interpretability, monitoring and management of model drift, and compliance recommendations, and you’ll gain practical insights into using AI governance tools to ensure fairness, bias mitigation, explainability, and privacy compliance in an enterprise setting. Additionally, you’ll examine the interpretability toolkits and fairness measures offered by major cloud AI providers such as IBM, Amazon, Google, and Microsoft, and discover how to use FairLearn for fairness assessment and bias mitigation. You’ll also learn to build explainable models using global and local feature summaries, local surrogate models, Shapley values, anchors, and counterfactual explanations. By the end of this book, you’ll be well equipped with tools and techniques to create transparent and accountable machine learning models.
Table of Contents (16 chapters)

Part 1: Bigot in the Machine – A Primer
Part 2: Enterprise Risk Observability Model Governance
Part 3: Explainable AI in Action

Open source toolkits and lenses

In this section, we will look at two prominent tools for addressing bias and fairness in AI systems: IBM AI Fairness 360 (AIF360) and Aequitas, a bias and fairness audit toolkit.

IBM AI Fairness 360

IBM AI Fairness 360 is an open source toolkit designed to help developers and data scientists identify, understand, and mitigate biases in ML models. The toolkit offers a range of algorithms for detecting and removing bias, as well as metrics for evaluating model fairness. Its suite of visualizations provides users with an intuitive way to interpret results and gain insights into the underlying causes of bias:

Figure 7.10: IBM AI Fairness 360 home page
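
To make this concrete, here is a minimal sketch of computing two of the toolkit’s dataset-level fairness metrics with the aif360 Python package (installable via pip install aif360). The DataFrame, column names, and group definitions are invented purely for illustration:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy tabular data: 'sex' is the protected attribute (1 = privileged group)
# and 'label' is the binary outcome (1 = favorable). Values are illustrative.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.7, 0.4, 0.6, 0.5, 0.3, 0.2],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Wrap the DataFrame in AIF360's binary-label dataset abstraction.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=unprivileged,
    privileged_groups=privileged,
)

# Disparate impact: ratio of unprivileged to privileged favorable-outcome rates
# (1.0 is parity; here 0.25 / 0.75 ≈ 0.33).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: difference of those rates (0.0 is parity).
print("Statistical parity difference:", metric.statistical_parity_difference())

A disparate impact well below 1.0, as in this toy data, signals that the unprivileged group receives the favorable outcome far less often than the privileged group.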

By using the IBM AI Fairness 360 toolkit, developers can enhance the transparency and accountability of their ML models. The toolkit allows users to detect and address bias at every stage of the model development process, from data preprocessing to model deployment...
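
As a sketch of the mitigation side, the snippet below applies Reweighing, one of AIF360’s preprocessing algorithms, to the same kind of toy data as above; again, the data and group definitions are illustrative assumptions, not the book’s own example:

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Same illustrative data as in the previous snippet.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.7, 0.4, 0.6, 0.5, 0.3, 0.2],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Reweighing adjusts instance weights (not features or labels) so that the
# protected attribute and the outcome become statistically independent.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_rw = rw.fit_transform(dataset)

# The weighted favorable-outcome rates of the two groups are now equal,
# so disparate impact comes out at (or very near) 1.0.
metric_rw = BinaryLabelDatasetMetric(dataset_rw,
                                     unprivileged_groups=unprivileged,
                                     privileged_groups=privileged)
print("Disparate impact after reweighing:", metric_rw.disparate_impact())

Because Reweighing only changes instance weights, it leaves the underlying features and labels untouched, which makes it a comparatively non-invasive first mitigation step before resorting to in-processing or post-processing algorithms.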