Getting Started with Amazon SageMaker Studio

By: Michael Hsieh

Overview of this book

Amazon SageMaker Studio is the first integrated development environment (IDE) for machine learning (ML) and is designed to bring the ML workflow steps of data preparation, feature engineering, statistical bias detection, automated machine learning (AutoML), training, hosting, ML explainability, monitoring, and MLOps together in one environment. In this book, you'll start by exploring the features available in Amazon SageMaker Studio to analyze data, develop ML models, and productionize models to meet your goals. As you progress, you will learn how these features work together to address common challenges when building ML models in production. After that, you'll understand how to effectively scale and operationalize the ML life cycle using SageMaker Studio. By the end of this book, you'll have learned ML best practices for Amazon SageMaker Studio and be able to improve productivity in the ML development life cycle, building and deploying models easily for your ML use cases.
Table of Contents (16 chapters)

Part 1 – Introduction to Machine Learning on Amazon SageMaker Studio
Part 2 – End-to-End Machine Learning Life Cycle with SageMaker Studio
Part 3 – The Production and Operation of Machine Learning with SageMaker Studio

Explaining ML models using SHAP values

SageMaker Clarify also computes model-agnostic feature attributions based on the concept of Shapley values, exposed as SHAP (SHapley Additive exPlanations) values. Shapley values quantify the contribution each feature makes to a model's predictions, and feature attribution helps explain how a model makes its decisions. Having a quantifiable way to describe how a model arrives at a decision allows us to trust the ML model, satisfy regulatory requirements, and support the human decision-making process.
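
For reference, the classical Shapley value of feature i, given the full feature set N and a value function v that returns the model's expected prediction for a subset of features, is defined as:

\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \left( v(S \cup \{i\}) - v(S) \right)

Computing this exactly requires evaluating every feature subset, so SHAP estimates the values approximately, which keeps the computation tractable for models with many features.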

As with the bias analysis jobs we ran with SageMaker Clarify, it takes three configurations to set up a model explainability job: a data configuration, a model configuration, and an explainability configuration. Let's continue with the following steps in the same notebook:

  1. Create a data configuration with the training dataset (matched). This is similar to the data configurations we created before. The code is illustrated in the following snippet, and a fuller sketch of the job setup follows after the snippet:
    explainability_data_config...
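
As a rough, minimal sketch of how these three configurations and the Clarify processor could be wired together with the SageMaker Python SDK: the bucket, file, column, and model names below are placeholders, and the SHAP baseline, sample count, and instance sizes are illustrative choices rather than values from the book.

    from sagemaker import Session, get_execution_role
    from sagemaker import clarify

    session = Session()
    role = get_execution_role()

    # Placeholder locations and names; replace with the values from your notebook
    train_uri = 's3://<bucket>/<prefix>/train_matched.csv'
    output_uri = 's3://<bucket>/<prefix>/clarify-explainability'
    headers = ['LABEL', 'FEATURE_1', 'FEATURE_2']

    # 1. Data configuration: where the (matched) training data lives and
    #    where Clarify should write the explainability results
    explainability_data_config = clarify.DataConfig(
        s3_data_input_path=train_uri,
        s3_output_path=output_uri,
        label='LABEL',
        headers=headers,
        dataset_type='text/csv')

    # 2. Model configuration: the SageMaker model that Clarify will invoke
    #    during the job
    model_config = clarify.ModelConfig(
        model_name='<your-sagemaker-model-name>',
        instance_type='ml.m5.xlarge',
        instance_count=1,
        accept_type='text/csv',
        content_type='text/csv')

    # 3. Explainability (SHAP) configuration: a baseline row of feature
    #    values (label excluded) and the number of samples used to
    #    approximate the SHAP values
    shap_config = clarify.SHAPConfig(
        baseline=[[0.5, 0.5]],
        num_samples=100,
        agg_method='mean_abs')

    clarify_processor = clarify.SageMakerClarifyProcessor(
        role=role,
        instance_count=1,
        instance_type='ml.m5.xlarge',
        sagemaker_session=session)

    clarify_processor.run_explainability(
        data_config=explainability_data_config,
        model_config=model_config,
        explainability_config=shap_config)

run_explainability launches a processing job that stands up a temporary endpoint for the model, computes the SHAP values, and writes the explainability report to the configured S3 output path.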