Accelerate Deep Learning Workloads with Amazon SageMaker

By : Vadim Dabravolski
Overview of this book

Over the past 10 years, deep learning has grown from an academic research field into a technology with wide-scale adoption across multiple industries. Deep learning models demonstrate excellent results on a wide range of practical tasks, underpinning emerging fields such as virtual assistants, autonomous driving, and robotics. In this book, you will learn about the practical aspects of designing, building, and optimizing deep learning workloads on Amazon SageMaker. The book also provides end-to-end implementation examples for popular deep learning tasks, such as computer vision and natural language processing. You will begin by exploring key Amazon SageMaker capabilities in the context of deep learning. Then, you will study in detail the theoretical and practical aspects of training and hosting your deep learning models on Amazon SageMaker. You will learn how to train and serve deep learning models using popular open-source frameworks and understand the hardware and software options available to you on Amazon SageMaker. The book also covers various optimization techniques to improve the performance and cost characteristics of your deep learning workloads. By the end of this book, you will be fluent in the software and hardware aspects of running deep learning workloads using Amazon SageMaker.
Table of Contents (16 chapters)

Part 1: Introduction to Deep Learning on Amazon SageMaker
Part 2: Building and Training Deep Learning Models
Part 3: Serving Deep Learning Models

Optimizing data storage and retrieval

When training SOTA DL models, you typically need a large training dataset. Storing and retrieving such large datasets can be expensive. For instance, the popular computer vision dataset COCO2017 is approximately 30 GB, while the Common Crawl dataset for NLP tasks runs to hundreds of TB. Dealing with datasets at this scale requires careful consideration of where to store the data and how to retrieve it at training or inference time. In this section, we will discuss optimization strategies for choosing how to store and retrieve your data.
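To get an intuition for the scale involved, a back-of-the-envelope calculation is often enough to compare options. The sketch below estimates monthly storage cost and full-dataset read time; the per-GB price (roughly Amazon S3 Standard pricing at the time of writing) and the throughput figure are illustrative assumptions, not authoritative values.

```python
# Back-of-the-envelope helpers for sizing dataset storage and retrieval.
# Prices and throughput below are ASSUMPTIONS for illustration only
# (approximate S3 Standard per-GB-month pricing); check current pricing.

def estimate_monthly_cost_usd(size_gb: float, price_per_gb_month: float = 0.023) -> float:
    """Approximate monthly storage cost for a dataset of `size_gb`."""
    return size_gb * price_per_gb_month

def estimate_full_read_seconds(size_gb: float, throughput_mb_s: float) -> float:
    """Time to stream the entire dataset once at a given aggregate throughput."""
    return (size_gb * 1024) / throughput_mb_s

# COCO2017 (~30 GB) versus a hypothetical 200 TB crawl-scale corpus:
print(round(estimate_monthly_cost_usd(30), 2))        # under a dollar per month
print(round(estimate_monthly_cost_usd(200_000), 2))   # thousands of dollars per month
print(estimate_full_read_seconds(30, 400))            # seconds to read one epoch's data
```

Even this crude model makes the trade-off visible: for a 30 GB dataset, storage cost is negligible and retrieval dominates, while for a hundreds-of-TB corpus, the storage bill itself becomes a first-order design concern.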

Choosing a storage solution

When choosing an optimal storage solution, you may consider the following factors, among others:

  • The cost of storage and data retrieval
  • The latency and throughput requirements for data retrieval
  • Data partitioning
  • How frequently data is refreshed
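To show where latency and throughput considerations surface in practice, the sketch below builds one entry of the low-level `InputDataConfig` structure that SageMaker's `CreateTrainingJob` API expects for an S3-backed data channel. The bucket name is a hypothetical placeholder, and choosing `FastFile` mode (which streams objects from S3 on demand rather than downloading the whole dataset before training starts) is an assumption for this example; `File` and `Pipe` are the other supported modes.

```python
# Hypothetical S3 location -- replace with your own bucket and prefix.
DATASET_S3_URI = "s3://my-training-bucket/datasets/coco2017/"

def build_input_channel(name: str, s3_uri: str, input_mode: str = "FastFile") -> dict:
    """Build one InputDataConfig entry for the CreateTrainingJob API.

    `FastFile` streams objects lazily, avoiding the startup cost of
    downloading a multi-GB dataset up front; `File` downloads everything
    first, and `Pipe` streams records through a named pipe.
    """
    return {
        "ChannelName": name,
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": s3_uri,
                "S3DataDistributionType": "FullyReplicated",
            }
        },
        "InputMode": input_mode,
    }

channel = build_input_channel("train", DATASET_S3_URI)
print(channel["InputMode"])  # FastFile
```

A dict like this would be passed (inside the `InputDataConfig` list) to `boto3`'s `create_training_job` call, or expressed equivalently through the SageMaker Python SDK's `TrainingInput` class.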

Let’s take a look at the pros and cons of various storage solutions...