Amazon SageMaker Best Practices

By: Sireesha Muppala, Randy DeFauw, Shelbee Eigenbrode

Overview of this book

Amazon SageMaker is a fully managed AWS service that provides the ability to build, train, deploy, and monitor machine learning models. The book begins with a high-level overview of Amazon SageMaker capabilities that map to the various phases of the machine learning process to help set the right foundation. You'll learn efficient tactics to address data science challenges such as processing data at scale, data preparation, connecting to big data pipelines, identifying data bias, running A/B tests, and model explainability using Amazon SageMaker. As you advance, you'll understand how to tackle the challenges of training at scale, including how to use large datasets while saving costs, monitor training resources to identify bottlenecks, speed up long training jobs, and track multiple models trained for a common goal. Moving ahead, you'll find out how to integrate Amazon SageMaker with other AWS services to build reliable, cost-optimized, and automated machine learning applications. In addition, you'll build ML pipelines that follow MLOps principles and apply best practices to build secure and performant solutions. By the end of the book, you'll be able to confidently apply Amazon SageMaker's wide range of capabilities to the full spectrum of machine learning workflows.
Table of Contents (20 chapters)
Section 1: Processing Data at Scale
Section 2: Model Training Challenges
Section 3: Manage and Monitor Models
Section 4: Automate and Operationalize Machine Learning

Chapter 10: Optimizing Model Hosting and Inference Costs

The introduction of more powerful computers (notably those with graphics processing units, or GPUs) and capable machine learning (ML) frameworks such as TensorFlow has resulted in a generational leap in ML capabilities. As ML practitioners, our purview now includes optimizing the use of these new capabilities to maximize the value we get for the time and money we spend.

In this chapter, you'll learn how to use multiple deployment strategies to meet your hosting and inference requirements. You'll learn when to compute and store inferences in advance versus getting them on demand, how to scale inference services to meet fluctuating demand, and how to use multiple models for model testing; brief code sketches of the latter two patterns follow the topic list below.
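To ground the first of these choices, here is a minimal sketch, assuming the SageMaker Python SDK (v2), that deploys the same model either to a persistent real-time endpoint or as a transient batch transform job. The container image, model artifact path, role ARN, and S3 locations are placeholders, not values from the book:

    import sagemaker
    from sagemaker.model import Model

    session = sagemaker.Session()

    # Placeholder values -- substitute your own artifact, container, and role.
    model = Model(
        image_uri="<ecr-image-uri>",
        model_data="s3://<bucket>/model/model.tar.gz",
        role="<execution-role-arn>",
        sagemaker_session=session,
    )

    # Option 1: real-time inference -- a persistent endpoint,
    # billed for every hour it is up, with or without traffic.
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.large",
    )

    # Option 2: batch inference -- instances exist only for the
    # duration of the transform job, so you pay only while it runs.
    transformer = model.transformer(
        instance_count=1,
        instance_type="ml.m5.large",
        output_path="s3://<bucket>/batch-output/",
    )
    transformer.transform(
        data="s3://<bucket>/batch-input/",
        content_type="text/csv",
        split_type="Line",
    )
    transformer.wait()

In practice you would choose one path or the other: batch transform suits predictions that can be computed and stored in advance, while a real-time endpoint suits on-demand, low-latency requests.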

In this chapter, we will cover the following topics:

  • Real-time inference versus batch inference
  • Deploying multiple models behind a single inference endpoint
  • Scaling inference endpoints to meet inference traffic...
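For the second topic, here is a minimal sketch of hosting several models behind one endpoint using the SageMaker Python SDK's MultiDataModel class. The model name, S3 prefix, payload, and container image are illustrative assumptions, and the container must support SageMaker's multi-model mode:

    import sagemaker
    from sagemaker.multidatamodel import MultiDataModel
    from sagemaker.serializers import CSVSerializer

    session = sagemaker.Session()

    # Placeholder names and paths -- every model artifact stored under
    # model_data_prefix becomes invokable from this one endpoint.
    mdm = MultiDataModel(
        name="demo-multi-model",
        model_data_prefix="s3://<bucket>/multi-model-artifacts/",
        image_uri="<multi-model-capable-ecr-image-uri>",
        role="<execution-role-arn>",
        sagemaker_session=session,
    )

    predictor = mdm.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.large",
        serializer=CSVSerializer(),
    )

    # Each request names the artifact to invoke; SageMaker loads it on
    # demand and caches it on the instance for subsequent requests.
    result = predictor.predict("1.0,2.0,3.0", target_model="model-a.tar.gz")

For the third topic, here is a minimal autoscaling sketch using the Application Auto Scaling API via boto3. The endpoint name, variant name, capacity limits, and target value are placeholders, not values from the chapter:

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Placeholder endpoint and variant names -- substitute your own.
    resource_id = "endpoint/demo-endpoint/variant/AllTraffic"

    # Register the variant's instance count as a scalable target.
    autoscaling.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=1,
        MaxCapacity=4,
    )

    # Target tracking: add or remove instances to hold the average
    # invocations-per-instance near the target value.
    autoscaling.put_scaling_policy(
        PolicyName="invocations-target-tracking",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 1000.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
            "ScaleInCooldown": 300,
            "ScaleOutCooldown": 60,
        },
    )

With target tracking, SageMaker adds instances when traffic pushes the per-instance invocation rate above the target and removes them as traffic subsides, keeping capacity matched to demand.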