Amazon SageMaker Best Practices

By: Sireesha Muppala, Randy DeFauw, Shelbee Eigenbrode

Overview of this book

Amazon SageMaker is a fully managed AWS service that provides the ability to build, train, deploy, and monitor machine learning models. The book begins with a high-level overview of Amazon SageMaker capabilities that map to the various phases of the machine learning process to help set the right foundation. You'll learn efficient tactics to address data science challenges such as processing data at scale, data preparation, connecting to big data pipelines, identifying data bias, running A/B tests, and model explainability using Amazon SageMaker. As you advance, you'll understand how to tackle the challenge of training at scale, including how to use large datasets while saving costs, monitor training resources to identify bottlenecks, speed up long training jobs, and track multiple models trained for a common goal. Moving ahead, you'll find out how you can integrate Amazon SageMaker with other AWS services to build reliable, cost-optimized, and automated machine learning applications. In addition to this, you'll build ML pipelines integrated with MLOps principles and apply best practices to build secure and performant solutions. By the end of the book, you'll be able to confidently apply Amazon SageMaker's wide range of capabilities to the full spectrum of machine learning workflows.
Table of Contents (20 chapters)
Section 1: Processing Data at Scale
Section 2: Model Training Challenges
Section 3: Manage and Monitor Models
Section 4: Automate and Operationalize Machine Learning

Data preparation at scale with SageMaker Processing

Now let's turn our attention to preparing the entire dataset. At 500 GB, it's too large to process using sklearn on a single EC2 instance. We will write a SageMaker processing job that uses Spark ML for data preparation. (Alternatively, you can use Dask, but at the time of writing, SageMaker Processing does not provide a Dask container out of the box.)

The Processing Job part of this chapter's notebook walks you through launching the processing job. Note that we'll use a cluster of 15 EC2 instances to run the job (if you need limits raised, you can contact AWS support).
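The launch described above can be sketched with the SageMaker Python SDK's `PySparkProcessor`. This is a minimal illustration, not the book's notebook code: the job name, framework version, instance type, script path, and S3 layout are all assumptions, and you would substitute your own execution role and bucket.

```python
# Sketch of launching a Spark-based SageMaker Processing job.
# All names below (job name, paths, instance type) are placeholders.

def cluster_config(instance_count=15, instance_type="ml.m5.4xlarge"):
    """Cluster settings for the processing job: 15 instances, as in the text.
    The instance type is an assumption -- size it for your own data."""
    return {"instance_count": instance_count, "instance_type": instance_type}

def launch_job(role_arn, bucket):
    # Imported here so cluster_config stays usable without the SDK installed.
    from sagemaker.spark.processing import PySparkProcessor

    processor = PySparkProcessor(
        base_job_name="openaq-prep",      # hypothetical job name
        framework_version="3.1",          # pick a Spark version the SDK supports
        role=role_arn,
        **cluster_config(),
    )
    processor.run(
        submit_app="preprocess.py",       # hypothetical Spark ML script
        arguments=["--input", f"s3://{bucket}/openaq/parquet/",
                   "--output", f"s3://{bucket}/openaq/prepared/"],
    )
```

Keeping the cluster settings in a small helper makes it easy to rerun the same job at a different scale while you wait for a limit increase.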

Also note that up until now, we've been working with the uncompressed JSON version of the data. This format, consisting of thousands of small JSON files, is not ideal for Spark processing, as the Spark executors will spend much of their time on I/O rather than computation. Luckily, the OpenAQ dataset also includes a gzipped Parquet version of the data. Compression...
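A preprocessing script submitted to the job might read the Parquet data along these lines. This is a hedged sketch, not the book's script: the argument names, paths, and the bare read/write pipeline are illustrative assumptions.

```python
# Hypothetical preprocess.py: read the gzipped Parquet data with Spark.
# Paths and argument names are assumptions for illustration.

def parse_args(argv):
    """Tiny --input/--output argument parser (stdlib only)."""
    opts = dict(zip(argv[::2], argv[1::2]))
    return opts["--input"], opts["--output"]

def main(argv):
    # Imported here so parse_args stays importable without Spark installed.
    from pyspark.sql import SparkSession

    input_path, output_path = parse_args(argv)
    spark = SparkSession.builder.appName("openaq-prep").getOrCreate()

    # Parquet is columnar and splittable, so executors read only the
    # columns they need instead of opening thousands of small JSON files.
    df = spark.read.parquet(input_path)

    # ... feature engineering with Spark ML would go here ...

    df.write.mode("overwrite").parquet(output_path)

if __name__ == "__main__":
    import sys
    main(sys.argv[1:])
```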