Amazon SageMaker Best Practices

By: Sireesha Muppala, Randy DeFauw, Shelbee Eigenbrode

Overview of this book

Amazon SageMaker is a fully managed AWS service that provides the ability to build, train, deploy, and monitor machine learning models. The book begins with a high-level overview of Amazon SageMaker capabilities that map to the various phases of the machine learning process to help set the right foundation. You'll learn efficient tactics to address data science challenges such as processing data at scale, data preparation, connecting to big data pipelines, identifying data bias, running A/B tests, and model explainability using Amazon SageMaker. As you advance, you'll understand how you can tackle the challenge of training at scale, including how to use large datasets while saving costs, monitoring training resources to identify bottlenecks, speeding up long training jobs, and tracking multiple models trained for a common goal. Moving ahead, you'll find out how you can integrate Amazon SageMaker with other AWS services to build reliable, cost-optimized, and automated machine learning applications. In addition to this, you'll build ML pipelines integrated with MLOps principles and apply best practices to build secure and performant solutions. By the end of the book, you'll be able to confidently apply Amazon SageMaker's wide range of capabilities to the full spectrum of machine learning workflows.
Table of Contents (20 chapters)

Section 1: Processing Data at Scale
Section 2: Model Training Challenges
Section 3: Manage and Monitor Models
Section 4: Automate and Operationalize Machine Learning

Optimizing models with SageMaker Neo

In the previous section, we saw how Elastic Inference can reduce inference costs for deep learning models. SageMaker Neo also improves inference performance and reduces costs, but it does so by compiling trained ML models into optimized artifacts for specific target platforms. While compilation helps in general, it is particularly effective when you are trying to run inference on low-powered edge devices.

To use SageMaker Neo, you start a compilation job with a trained model in a supported framework. When the compilation job completes, you can deploy the resulting artifact to a SageMaker endpoint or to an edge device running AWS IoT Greengrass.
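As a minimal sketch of what starting and monitoring a compilation job can look like with boto3 (the job name, S3 paths, role ARN, input shape, and target device below are placeholder assumptions, not values from the notebook):

    import time
    import boto3

    sm = boto3.client("sagemaker")

    job_name = "xgboost-neo-example"  # hypothetical job name

    # All S3 paths and the role ARN below are placeholders.
    sm.create_compilation_job(
        CompilationJobName=job_name,
        RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
        InputConfig={
            "S3Uri": "s3://my-bucket/model/model.tar.gz",  # trained model artifact
            "DataInputConfig": '{"data": [1, 10]}',        # shape: 1 row x 10 features
            "Framework": "XGBOOST",
        },
        OutputConfig={
            "S3OutputLocation": "s3://my-bucket/compiled/",
            "TargetDevice": "ml_c5",                       # compile for ml.c5 hosts
        },
        StoppingCondition={"MaxRuntimeInSeconds": 900},
    )

    # Poll until the job finishes.
    status = "INPROGRESS"
    while status == "INPROGRESS":
        time.sleep(30)
        status = sm.describe_compilation_job(
            CompilationJobName=job_name
        )["CompilationJobStatus"]
    print(status)  # COMPLETED or FAILED

The DataInputConfig string tells Neo the shape of a single inference record, and TargetDevice determines which hardware the compiled artifact is optimized for.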

The Model optimization with SageMaker Neo section in the notebook demonstrates how to compile our XGBoost model for use on a hosted endpoint:

  1. First, we need to get the number of features in an input record; Neo will need this feature count to define the input shape (see the sketch after this list):
    # t_lines holds the raw CSV lines of the dataset, read earlier in the notebook
    ncols = len(t_lines[0].split(','))
  2. Now, we'll...
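As a hedged, self-contained sketch of how that feature count typically feeds a Neo compilation via the SageMaker Python SDK, reusing ncols from step 1 (the model artifact path, container version, output location, and instance choices below are placeholder assumptions, not the notebook's actual values):

    import sagemaker
    from sagemaker.model import Model

    sess = sagemaker.Session()
    role = sagemaker.get_execution_role()

    # Placeholder artifact location; point this at the trained XGBoost model.
    xgb_model = Model(
        model_data="s3://my-bucket/model/model.tar.gz",
        image_uri=sagemaker.image_uris.retrieve(
            "xgboost", sess.boto_region_name, "1.3-1"
        ),
        role=role,
        sagemaker_session=sess,
    )

    # Neo needs the expected input shape: one record with ncols features.
    compiled_model = xgb_model.compile(
        target_instance_family="ml_c5",
        input_shape={"data": [1, ncols]},
        output_path="s3://my-bucket/compiled/",  # placeholder output location
        role=role,
        framework="xgboost",
    )

    # Deploy the compiled artifact to a hosted endpoint.
    predictor = compiled_model.deploy(
        initial_instance_count=1,
        instance_type="ml.c5.xlarge",
    )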