Platform and Model Design for Responsible AI

By: Amita Kapoor, Sharmistha Chatterjee
4.8 (36)

Overview of this book

AI algorithms are ubiquitous and used for tasks ranging from recruiting to deciding who will get a loan. With such widespread use of AI in decision-making, it's necessary to build explainable, responsible, transparent, and trustworthy AI-enabled systems. With Platform and Model Design for Responsible AI, you'll be able to make existing black-box models transparent. You'll be able to identify and eliminate bias in your models, deal with uncertainty arising from both data and model limitations, and deliver a responsible AI solution. You'll start by designing ethical models for traditional ML and deep learning models, and deploying them in a sustainable production setup. After that, you'll learn how to set up data pipelines, validate datasets, and set up component microservices in a secure and private way in any cloud-agnostic framework. You'll then build a fair and private ML model with proper constraints, tune the hyperparameters, and evaluate the model metrics. By the end of this book, you'll know the best practices for complying with data privacy and ethics laws, as well as the techniques needed for data anonymization. You'll be able to develop models with explainability, store them in feature stores, and handle uncertainty in model predictions.
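
One idea mentioned above, identifying bias in a model, can be made concrete with a short sketch. The following is not code from the book; it computes a simple demographic parity difference on hypothetical predictions and a hypothetical binary protected attribute.

# Minimal, illustrative fairness check: demographic parity difference.
# The predictions, group labels, and any acceptable threshold are hypothetical.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical model outputs (1 = loan approved) and a binary protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")

A gap close to zero means the model produces positive predictions for both groups at similar rates; how large a gap is tolerable is a policy decision, not a purely technical one.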
Table of Contents (21 chapters)

  • Part 1: Risk Assessment Machine Learning Frameworks in a Global Landscape
  • Part 2: Building Blocks and Patterns for a Next-Generation AI Ecosystem
  • Part 3: Design Patterns for Model Optimization and Life Cycle Management
  • Part 4: Implementing an Organization Strategy, Best Practices, and Use Cases

The Emergence of Risk-Averse Methodologies and Frameworks

This chapter gives a detailed overview of defining and architecting ML defense frameworks that can protect data, ML models, and other necessary artifacts at different stages of ML training and evaluation pipelines. You will learn about different anonymization, encryption, and application-level privacy techniques, as well as hybrid security measures, that serve as the basis of ML model development for both centralized and distributed learning. You will also discover scenario-based defense techniques that can be applied to safeguard data and models in practical, industry-grade ML use cases. The primary objective of this chapter is to explain the application of commonly used defense tools, libraries, and metrics available for large-scale ML SaaS platforms.

In this chapter, these topics will be covered in the following sections:

  • Threat matrix and defense techniques
  • Anonymization...
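
As a small taste of the first topic above, the following sketch (not code from the book) shows two basic anonymization steps: pseudonymizing a direct identifier with a salted hash and generalizing a quasi-identifier into a coarser band. The column names, salt, and records are hypothetical.

# Minimal, illustrative anonymization steps on hypothetical training records.
import hashlib

import pandas as pd

def pseudonymize(value: str, salt: str = "example-salt") -> str:
    """Replace a direct identifier with a truncated, salted SHA-256 digest."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

def generalize_age(age: int) -> str:
    """Coarsen an exact age into a 10-year band to reduce re-identification risk."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

# Hypothetical records containing personal data.
df = pd.DataFrame(
    {
        "email": ["alice@example.com", "bob@example.com"],
        "age": [34, 52],
        "loan_approved": [1, 0],
    }
)

df["email"] = df["email"].map(pseudonymize)   # pseudonymize the direct identifier
df["age"] = df["age"].map(generalize_age)     # generalize the quasi-identifier
print(df)

Real deployments go further, for example with keyed hashing under proper key management, k-anonymity checks, or differential privacy: techniques of the kind this chapter goes on to discuss.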