
Platform and Model Design for Responsible AI

By : Amita Kapoor, Sharmistha Chatterjee

Overview of this book

AI algorithms are ubiquitous, used for tasks ranging from recruiting to deciding who will get a loan. With such widespread use of AI in decision-making, it's necessary to build explainable, responsible, transparent, and trustworthy AI-enabled systems. With Platform and Model Design for Responsible AI, you'll be able to make existing black-box models transparent. You'll be able to identify and eliminate bias in your models, deal with uncertainty arising from both data and model limitations, and deliver a responsible AI solution. You'll start by designing ethical models for traditional and deep learning ML models, as well as deploying them in a sustainable production setup. After that, you'll learn how to set up data pipelines, validate datasets, and set up component microservices in a secure and private way in any cloud-agnostic framework. You'll then build a fair and private ML model with proper constraints, tune the hyperparameters, and evaluate the model metrics. By the end of this book, you'll know the best practices to comply with data privacy and ethics laws, in addition to the techniques needed for data anonymization. You'll be able to develop models with explainability, store them in feature stores, and handle uncertainty in model predictions.
Table of Contents (21 chapters)

Part 1: Risk Assessment Machine Learning Frameworks in a Global Landscape
Part 2: Building Blocks and Patterns for a Next-Generation AI Ecosystem
Part 3: Design Patterns for Model Optimization and Life Cycle Management
Part 4: Implementing an Organization Strategy, Best Practices, and Use Cases

Understanding model explainability during concept drift/calibration

In the preceding section, we learned about different types of concept drift. Now, let us study how we can explain them with interpretable ML:

  1. First, we import the necessary packages for creating a regression model and the drift explainer library, CinnaMon. The California Housing dataset is used to explain concept drift:
    import pandas as pd
    from xgboost import XGBRegressor
    from cinnamon.drift import ModelDriftExplainer, AdversarialDriftExplainer
    from sklearn.datasets import fetch_california_housing
    california = fetch_california_housing(as_frame=True)
    california_df = pd.DataFrame(california.data, columns=california.feature_names)
    RANDOM_SEED = 2021
  2. Then, we train the XGBoost regressor model:
    model = XGBRegressor(n_estimators=1000, booster="gbtree",
                         objective="reg:squarederror", learning_rate=0.05,
                         max_depth=6, seed=RANDOM_SEED)