Learn Amazon SageMaker - Second Edition

By: Julien Simon

Overview of this book

Amazon SageMaker enables you to quickly build, train, and deploy machine learning models at scale without managing any infrastructure. It helps you focus on the machine learning problem at hand and deploy high-quality models by eliminating the heavy lifting typically involved in each step of the ML process. This second edition helps data scientists and ML developers explore new features such as SageMaker Data Wrangler, Pipelines, Clarify, and Feature Store. You'll start by learning how to use the various capabilities of SageMaker as a single toolset to solve ML challenges, then move on to AutoML, built-in algorithms and frameworks, and writing your own code and algorithms to build ML models. The book then shows you how to integrate Amazon SageMaker with popular deep learning libraries, such as TensorFlow and PyTorch, to extend the capabilities of existing models. You'll also see how automating your workflows can help you get to production faster with minimum effort and at lower cost. Finally, you'll explore SageMaker Debugger and SageMaker Model Monitor to detect quality issues in training and production. By the end of this book, you'll be able to use Amazon SageMaker across the full spectrum of ML workflows, from experimentation, training, and monitoring to scaling, deployment, and automation.
Table of Contents (19 chapters)

Section 1: Introduction to Amazon SageMaker
Section 2: Building and Training Models
Section 3: Diving Deeper into Training
Section 4: Managing Models in Production

Building a fully custom container for SageMaker Processing

We'll reuse the news headlines example from Chapter 6, Training Natural Language Processing Models:

  1. We start with a Dockerfile based on a minimal Python image. We install the dependencies, add our processing script, and define it as our entry point (a skeleton of such a script is sketched after this list):
    # Start from a slim Python image to keep the container small
    FROM python:3.7-slim
    # Install the libraries the processing script depends on
    RUN pip3 install --no-cache-dir gensim nltk sagemaker
    # Fetch the NLTK data used for stop word removal and lemmatization
    RUN python3 -m nltk.downloader stopwords wordnet
    # Copy the script into the image and make it the container's entry point
    ADD preprocessing-lda-ntm.py /
    ENTRYPOINT ["python3", "/preprocessing-lda-ntm.py"]
  2. We build the image and tag it as sm-processing-custom:latest:
    $ docker build -t sm-processing-custom:latest -f Dockerfile .

    The resulting image is 497 MB. For comparison, it's 1.2 GB if we start from python:3.7 instead of python:3.7-slim. The smaller image is faster to push and pull.

  3. Using the AWS CLI, we create a repository in Amazon ECR to host this image, and we log in to the repository (the tag-and-push commands are sketched after this list):
    $ aws ecr create-repository --repository-name sm-processing-custom...
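
For reference, here is what a self-contained entry-point script for such a container could look like. SageMaker Processing mounts job inputs and collects outputs under /opt/ml/processing inside the container. This is a minimal sketch, not the actual preprocessing-lda-ntm.py from Chapter 6; the file names and the assumption that the headline text sits in the last CSV column are illustrative only:

    # Minimal sketch of a Processing entry-point script (the real
    # preprocessing-lda-ntm.py from Chapter 6 prepares data for LDA/NTM)
    import csv, glob, os
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer

    # SageMaker Processing mounts inputs and collects outputs under /opt/ml/processing
    input_dir = '/opt/ml/processing/input'   # matches the ProcessingInput destination
    output_dir = '/opt/ml/processing/train'  # matches the ProcessingOutput source

    stop_words = set(stopwords.words('english'))
    lemmatizer = WordNetLemmatizer()

    def clean(text):
        # Lowercase, drop stop words, and lemmatize the remaining tokens
        return ' '.join(lemmatizer.lemmatize(t) for t in text.lower().split()
                        if t.isalpha() and t not in stop_words)

    os.makedirs(output_dir, exist_ok=True)
    with open(os.path.join(output_dir, 'headlines-processed.csv'), 'w', newline='') as out:
        writer = csv.writer(out)
        for path in glob.glob(os.path.join(input_dir, '*.csv')):
            with open(path) as f:
                for row in csv.reader(f):
                    # Assume the headline text is in the last column
                    writer.writerow([clean(row[-1])])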
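
Once the repository exists, the image is tagged with the repository URI and pushed. A sketch of the typical workflow, assuming AWS CLI v2 and placeholder account (123456789012) and region (eu-west-1) values:

    $ aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
    $ docker tag sm-processing-custom:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/sm-processing-custom:latest
    $ docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/sm-processing-custom:latest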
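
With the image in ECR, we can launch a processing job with the SageMaker SDK. Because the script is baked into the container as its entry point, the generic Processor class is a natural fit (ScriptProcessor, by contrast, injects a script at runtime). The following is a sketch; the image URI, S3 paths, and instance type are placeholders rather than values from the book:

    # Run the custom container as a SageMaker Processing job
    import sagemaker
    from sagemaker.processing import Processor, ProcessingInput, ProcessingOutput

    processor = Processor(
        role=sagemaker.get_execution_role(),  # assumes a SageMaker notebook/Studio session
        image_uri='123456789012.dkr.ecr.eu-west-1.amazonaws.com/sm-processing-custom:latest',
        instance_count=1,
        instance_type='ml.c5.2xlarge')

    processor.run(
        inputs=[ProcessingInput(
            source='s3://my-bucket/headlines/raw',  # placeholder input location
            destination='/opt/ml/processing/input')],
        outputs=[ProcessingOutput(
            source='/opt/ml/processing/train',
            destination='s3://my-bucket/headlines/processed')])  # placeholder output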