Responsible AI in the Enterprise

By: Adnan Masood, Heather Dawe
Overview of this book

Responsible AI in the Enterprise is a comprehensive guide to implementing ethical, transparent, and compliant AI systems in an organization. With a focus on understanding key concepts of machine learning models, this book equips you with techniques and algorithms to tackle complex issues such as bias, fairness, and model governance. Throughout the book, you’ll work with toolkits including Fairlearn and InterpretML, along with the Google What-If Tool, ML Fairness Gym, IBM AI Fairness 360, and Aequitas. You’ll explore various aspects of responsible AI, including model interpretability, monitoring and management of model drift, and compliance recommendations, and you’ll gain practical insight into using AI governance tools to ensure fairness, bias mitigation, explainability, and privacy and regulatory compliance in an enterprise setting. Additionally, you’ll survey the interpretability toolkits and fairness measures offered by major cloud AI providers such as IBM, Amazon, Google, and Microsoft, while discovering how to use Fairlearn for fairness assessment and bias mitigation. You’ll also learn to build explainable models using global and local feature summaries, local surrogate models, Shapley values, anchors, and counterfactual explanations. By the end of this book, you’ll be well equipped with tools and techniques to create transparent and accountable machine learning models.
Table of Contents (16 chapters)

Part 1: Bigot in the Machine – A Primer
Part 2: Enterprise Risk Observability Model Governance
Part 3: Explainable AI in Action

What this book covers

This book is a comprehensive guide to responsible AI and machine learning model governance. It covers a broad range of topics, including XAI, ethical AI, bias in AI systems, model interpretability, model governance and compliance, fairness and accountability in AI, data governance, and ethical AI education and upskilling. It provides practical insight into using tools such as Microsoft Fairlearn for fairness assessment and bias mitigation. It is a must-read for data scientists, machine learning engineers, AI practitioners, IT professionals, business stakeholders, and AI ethicists who are responsible for implementing AI models in their organizations. The content is presented in an easy-to-understand style, making it a valuable resource for professionals at all levels of expertise.

Chapter 1, Explainable and Ethical AI Primer, provides a comprehensive understanding of key concepts related to explainable and interpretable AI. You will become familiar with the terminology of safe, ethical, explainable, robust, transparent, auditable, and interpretable machine learning. This chapter serves as a solid foundation for novices as well as a reference for experienced machine learning practitioners. It starts with a discussion of the machine learning development life cycle and outlines the taxonomy of interpretable AI and model risk observability, providing a complete overview of the field.

Chapter 2, Algorithms Gone Wild, covers the current limitations and challenges of AI and how it can contribute to the amplification of existing biases. Despite these challenges, the chapter highlights the increasing use of AI and provides an overview of its various applications, including AI horror stories and cases of discrimination, bias, disinformation, fakes, social credit systems, surveillance, and scams. This chapter serves as a platform for discussion, bringing together the different uses of AI and offering a space for you to reflect on the potential consequences of its use. By the end of this chapter, you will have a deeper appreciation for the complex and nuanced nature of AI and the importance of considering its ethical and social implications.

Chapter 3, Opening the Algorithmic Black Box, teaches you about the field of XAI and its challenges, including a lack of formality and inconsistent definitions. The chapter provides an overview of four major categories of interpretability methods, allowing a multi-perspective comparison between them. Its purpose is to show how to explain black-box models and create white-box models, to ensure fairness and restrict discrimination, and to analyze the sensitivity of model predictions. The chapter also shows how to approximate black-box models with white-box models and explains the differential value proposition and approach of each of the libraries covered. By the end of this chapter, you will have a comprehensive understanding of the challenges and opportunities in the field of XAI, and of the various interpretability methods available for creating more transparent and explainable machine learning models.
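To make one of these methods concrete, here is a minimal, brute-force sketch of exact Shapley-value attribution in plain Python. It is illustrative only (the function name and toy model are my own, not code from the book), and real libraries such as SHAP use far more efficient approximations:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for a black-box f at point x, relative to a
    baseline: average f's marginal gain from adding feature i over all
    orderings (coalitions) of the remaining features."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy black-box: a linear scoring model, so the exact Shapley value of
# feature j should come out to w[j] * (x[j] - baseline[j]).
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wj * vj for wj, vj in zip(w, v))
print(shapley_values(f, [1.0, 3.0, 2.0], [0.0, 0.0, 0.0]))  # ≈ [2.0, -3.0, 1.0]
```

The enumeration over all coalitions is exponential in the number of features, which is exactly why sampling-based approximations dominate in practice.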

Chapter 4, Robust ML - Monitoring and Management, discusses ongoing validation and monitoring as an integral part of the model development life cycle. The chapter walks through the process of model performance monitoring, beginning with quantifying a model’s degradation: you will learn to identify the parameters to track and to define the thresholds that should raise an alert. It then covers the essential components of performance monitoring, including preserving the business purpose of a model and detecting drift during and after deployment, and shows how to leverage various techniques to build a process for detecting, alerting on, and addressing drift. The chapter demonstrates the importance of automated monitoring of a model running in production, providing comprehensive measures for data drift monitoring, model concept drift monitoring, statistical performance monitoring, ethical fairness monitoring, business scenario simulation, what-if analysis, and the comparison of production parameters such as parallel model execution and custom metrics. By the end of this chapter, you will have a comprehensive understanding of ongoing validation and monitoring in the model development life cycle and of the techniques for detecting and addressing drift in a model’s performance.
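As one concrete example of the kind of statistic such drift monitoring relies on, here is a plain-Python sketch of the Population Stability Index (PSI), a widely used data-drift measure. The function is illustrative only, not code from the book; a common rule of thumb reads PSI above roughly 0.25 as a major distribution shift:

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a baseline sample (expected)
    and a production sample (actual), using quantile bins derived from
    the baseline. 0 means identical bucket shares; larger means drift."""
    exp_sorted = sorted(expected)
    # Bin edges at baseline quantiles
    edges = [exp_sorted[int(len(exp_sorted) * k / bins)] for k in range(1, bins)]

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # Small floor avoids log(0) for empty buckets
        return [max(c / len(values), 1e-6) for c in counts]

    e_share, a_share = bucket_shares(expected), bucket_shares(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_share, a_share))

baseline = [i / 100 for i in range(100)]          # scores seen at training time
shifted = [0.5 + i / 200 for i in range(100)]     # production scores, shifted up
print(psi(baseline, baseline))  # 0.0 — no drift against itself
print(psi(baseline, shifted))   # well above 0.25 — alert-worthy drift
```

In a monitoring pipeline, a statistic like this would be computed per feature on a schedule, with the alert thresholds defined as the chapter describes.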

Chapter 5, Model Governance, Audit, and Compliance, explores the predictive power of machine learning algorithms and their ability to take in vast amounts of data from a variety of sources. The chapter focuses on the governance of these models, as there is growing concern about the lack of transparency in AI-driven decision-making processes. You will review various regulatory initiatives concerning AI and machine learning, including those of the U.S. House Financial Services Committee and the U.S. Federal Trade Commission. The chapter covers different audit and compliance standards and the rapidly evolving regulation of AI, given its potential impact on people’s lives, livelihoods, healthcare, and financial systems. You will understand the importance of auditability in AI models with production traceability, including the availability of immutable snapshots of models for long-term auditability, along with their source code, metadata, and other associated artifacts. By the end of this chapter, you will have a comprehensive understanding of the governance of machine learning models and of the importance of ensuring transparency and accountability in AI-driven decision-making processes.
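To illustrate what an immutable, traceable model snapshot might look like at its simplest, here is a minimal sketch. The function, record fields, and registry shape are hypothetical, chosen only to show the idea; production model registries (MLflow, SageMaker Model Registry, and the like) do far more:

```python
import hashlib
import json
import time

def register_model_snapshot(model_bytes, metadata):
    """Create a minimal audit-trail record: a content hash of the model
    artifact plus its metadata, so a deployed model can later be traced
    back to the exact bytes, code version, and context it shipped with."""
    record = {
        "sha256": hashlib.sha256(model_bytes).hexdigest(),  # immutable fingerprint
        "registered_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        **metadata,  # e.g. model name, version, git commit of training code
    }
    # Serializing with sorted keys gives a stable, append-only log entry
    return json.dumps(record, sort_keys=True)

entry = register_model_snapshot(
    b"...serialized model weights...",
    {"name": "credit_scoring", "version": 3, "git_commit": "hypothetical"},
)
print(entry)
```

Because the hash is derived from the artifact itself, any later tampering with the stored model is detectable by re-hashing, which is the core of the "immutable snapshot" guarantee the chapter discusses.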

Chapter 6, Enterprise Starter Kit for Fairness, Accountability, and Transparency, demonstrates the importance of putting ethical AI principles into action as organizations adopt AI. The chapter provides a practical approach to using appropriate tools to ensure AI fairness, bias mitigation, explainability, and privacy and regulatory compliance in an enterprise setting. You will gain an understanding of how trust, fairness, and comprehensibility are the keys to responsible and accountable AI, and of how AI governance can be achieved in an enterprise setting with supporting tools. The chapter walks through the implementation of bias mitigation and fairness, explainability, trust and transparency, and privacy and regulatory compliance within an organization. You will also review the variety of tools available for XAI, including the TensorBoard Projector, the What-If Tool, Aequitas, AI Fairness 360, AI Explainability 360, ELI5, explainerdashboard, Fairlearn, interpret, Scikit-Fairness, InterpretML, tf-explain, XAI, Amazon SageMaker Clarify, and Vertex Explainable AI. By the end of this chapter, you will have a comprehensive understanding of how to use AI governance tools to ensure the responsible and accountable use of AI in an enterprise setting.

Chapter 7, Interpretability Toolkits and Fairness Measures - AWS, GCP, Azure, and AIF 360, showcases the use of interpretability toolkits and cloud AI providers’ offerings to identify and limit bias and explain predictions in machine learning models. The chapter will provide an overview of the open source and cloud-based interpretability toolkits available, including IBM’s AIF360, Amazon SageMaker’s Clarify, Google’s Vertex Explainable AI, and Model Interpretability in Azure Machine Learning. You will gain a deeper understanding of the variety of tools available for explainable AI and the benefits they provide in terms of greater visibility into training data and models. By the end of this chapter, you will have a comprehensive understanding of the role of interpretability toolkits in ensuring the fairness and transparency of machine learning models.

Chapter 8, Fairness in AI Systems with Microsoft Fairlearn, discusses Microsoft Fairlearn, an open source fairness toolkit for AI. The chapter provides an overview of the toolkit and its capabilities, including its use as a guide for data scientists to better understand fairness issues in AI. You will learn about the two components of the Fairlearn Python package: metrics for assessing which groups are negatively impacted by a model and for comparing multiple models, and algorithms for mitigating the unfairness those metrics reveal. The chapter covers the assessment of fairness in terms of allocation harms and quality-of-service harms, as well as the mitigation of unfairness and approaches for improving an unfair model. By the end of this chapter, you will have a comprehensive understanding of Microsoft Fairlearn and its role in ensuring the fair and ethical use of machine learning models.
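The disaggregated view behind such metrics can be sketched in plain Python. The function names below are descriptive, not Fairlearn's actual API (Fairlearn exposes this via `MetricFrame` and `demographic_parity_difference`), but they compute the same quantities:

```python
def selection_rate_by_group(y_pred, sensitive):
    """Per-group selection rate: the share of positive (1) predictions
    each sensitive group receives — the disaggregated view that reveals
    whether a model allocates outcomes unevenly."""
    groups = {}
    for pred, g in zip(y_pred, sensitive):
        groups.setdefault(g, []).append(pred)
    return {g: sum(p) / len(p) for g, p in groups.items()}

def demographic_parity_difference(y_pred, sensitive):
    """Gap between the highest and lowest group selection rates;
    0 means all groups are selected at the same rate."""
    rates = selection_rate_by_group(y_pred, sensitive).values()
    return max(rates) - min(rates)

# Toy predictions: group "a" is selected 3 times out of 4,
# group "b" only once out of 4 — a demographic parity gap of 0.5.
y_pred    = [1, 1, 1, 0, 1, 0, 0, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rate_by_group(y_pred, sensitive))      # {'a': 0.75, 'b': 0.25}
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

The same pattern generalizes to any per-group metric (accuracy, false negative rate, and so on), which is how quality-of-service harms are assessed alongside allocation harms.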

Chapter 9, Fairness Assessment and Bias Mitigation with Fairlearn and the Responsible AI Toolbox, explores the practical application of Fairlearn in real-world scenarios. The chapter covers the evaluation of fairness-related metrics and techniques for mitigating bias and disparity using Fairlearn. You will also learn about the Responsible AI Toolbox, which provides a collection of model and data exploration and assessment user interfaces and libraries for a better understanding of AI systems.

The chapter will introduce the Responsible AI Dashboard, Error Analysis Dashboard, Interpretability Dashboard, and Fairness Dashboard and how they can be used to identify model errors, diagnose why those errors are happening, understand model predictions, and assess the fairness of the model. By the end of this chapter, you will have a comprehensive understanding of how to use the Responsible AI Toolbox and Fairlearn to ensure the fair and ethical use of machine learning models in your own work.
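One family of mitigation techniques the toolkit supports is post-processing: leaving the trained model alone and adjusting its decision thresholds per group. The sketch below is a simplified illustration of that idea (the intuition behind Fairlearn's `ThresholdOptimizer`, not its actual implementation, and the function name is my own):

```python
def equalizing_thresholds(scores, sensitive, target_rate):
    """Choose a per-group score threshold so that each sensitive group
    is selected at (approximately) the same target rate — a minimal
    post-processing mitigation for a demographic parity gap."""
    by_group = {}
    for s, g in zip(scores, sensitive):
        by_group.setdefault(g, []).append(s)
    thresholds = {}
    for g, vals in by_group.items():
        vals = sorted(vals, reverse=True)
        k = max(1, round(target_rate * len(vals)))
        thresholds[g] = vals[k - 1]  # admit exactly the top-k scores in this group
    return thresholds

# Group "a" scores systematically higher than group "b", so a single
# global cutoff would select almost no one from "b".
scores    = [0.9, 0.8, 0.6, 0.4, 0.5, 0.3, 0.2, 0.1]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
thr = equalizing_thresholds(scores, sensitive, target_rate=0.5)
selected = [s >= thr[g] for s, g in zip(scores, sensitive)]
print(thr)       # {'a': 0.8, 'b': 0.3} — two selected from each group
```

Fairlearn's real implementation additionally optimizes a performance objective subject to the fairness constraint; this sketch only shows why group-dependent thresholds can close a selection-rate gap.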

Chapter 10, Foundational Models and Azure OpenAI, demonstrates practical governance use cases for large language models (LLMs) – in this case, the API offerings of OpenAI and Azure OpenAI. The chapter covers the implementation of LLMs such as GPT-3, which can be used for a variety of business use cases, and delves into the challenges associated with governing them, such as data privacy and security. While these models can enhance the functionality of enterprise applications, they also pose significant governance challenges. The chapter highlights the importance of AI governance for the ethical and responsible use of LLMs and the need for bias remediation techniques to ensure that AI solutions are fair and unbiased. Additionally, we will discuss the data privacy and security measures provided by Azure OpenAI and the significance of establishing an AI governance framework for enterprise use of these tools.