Learn Amazon SageMaker

By: Julien Simon

Overview of this book

Amazon SageMaker enables you to quickly build, train, and deploy machine learning (ML) models at scale without managing any infrastructure. It helps you focus on the ML problem at hand and deploy high-quality models by removing the heavy lifting typically involved in each step of the ML process. This book is a comprehensive guide for data scientists and ML developers who want to learn the ins and outs of Amazon SageMaker. You’ll understand how to use its various modules as a single toolset to solve the challenges faced in ML. As you progress, you’ll cover features such as AutoML, built-in algorithms and frameworks, and the option to write your own code and algorithms to build ML models. Later, the book will show you how to integrate Amazon SageMaker with popular deep learning libraries such as TensorFlow and PyTorch to extend the capabilities of existing models. You’ll also learn how to get your models to production faster with minimum effort and at a lower cost. Finally, you’ll explore how to use Amazon SageMaker Debugger to analyze, detect, and highlight problems so that you can understand the current model state and improve model accuracy. By the end of this book, you’ll be able to use Amazon SageMaker across the full spectrum of ML workflows, from experimentation, training, and monitoring to scaling, deployment, and automation.
Table of Contents (19 chapters)

Section 1: Introduction to Amazon SageMaker
Section 2: Building and Training Models
Section 3: Diving Deeper on Training
Section 4: Managing Models in Production

Distributing training jobs

Distributed training lets you scale training jobs by running them on a cluster of CPU or GPU instances. Each instance may train either on the full dataset or on a fraction of it, depending on the distribution policy that we configure. FullyReplicated sends a full copy of the dataset to each instance. ShardedByS3Key splits the input files evenly across instances, which is where splitting your dataset into many files comes in handy.
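
Here is a minimal sketch of the two policies using the SageMaker Python SDK (v2); the bucket name and prefix are placeholders:

from sagemaker.inputs import TrainingInput

# FullyReplicated (the default): every training instance receives
# a complete copy of the dataset.
replicated_input = TrainingInput(
    s3_data="s3://my-bucket/training/",  # placeholder path
    distribution="FullyReplicated",
)

# ShardedByS3Key: the input files are split evenly across instances,
# so each instance sees only its own shard.
sharded_input = TrainingInput(
    s3_data="s3://my-bucket/training/",
    distribution="ShardedByS3Key",
)

Either input can then be passed to an estimator's fit() call; the policy only changes how SageMaker copies data to the instances, not how the algorithm itself parallelizes work.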

Distributing training for built-in algorithms

Distributed training is available for almost all built-in algorithms. Semantic Segmentation and LDA are notable exceptions.

As built-in algorithms are implemented with Apache MXNet, the training instances use its KVStore (key-value store) to exchange results. SageMaker sets it up automatically on one of the training instances. Curious minds can learn more at https://mxnet.apache.org/api/faq/distributed_training.
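
For illustration, here is a hedged sketch of launching a distributed training job for one built-in algorithm (Linear Learner is used here as an example), combining two instances with the ShardedByS3Key policy. The bucket paths and instance type are assumptions, and get_execution_role() assumes you are running inside a SageMaker notebook environment:

import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()

# Retrieve the container image for the built-in Linear Learner algorithm.
container = image_uris.retrieve("linear-learner", session.boto_region_name)

estimator = Estimator(
    image_uri=container,
    role=sagemaker.get_execution_role(),
    instance_count=2,                      # two instances train in parallel
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output/",  # placeholder path
)
estimator.set_hyperparameters(predictor_type="binary_classifier")

# With ShardedByS3Key, each of the two instances trains on half the files.
train_input = TrainingInput(
    s3_data="s3://my-bucket/training/",    # placeholder path
    distribution="ShardedByS3Key",
    content_type="text/csv",
)
estimator.fit({"train": train_input})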

Distributing training for built-in frameworks

You can use distributed...