Hands-On Artificial Intelligence on Amazon Web Services

By Subhashini Tripuraneni, Charles Song

Overview of this book

From data wrangling to translating text, you can accomplish all of this and more with the artificial intelligence and machine learning services available on AWS. With this book, you’ll work through hands-on exercises and learn to use these services to solve real-world problems. You’ll even design, develop, monitor, and maintain machine and deep learning models on AWS. The book starts with an introduction to AI and its applications in different industries, along with an overview of AWS artificial intelligence and machine learning services. You’ll then get to grips with detecting and translating text with Amazon Rekognition and Amazon Translate. The book will also show you how to convert speech to text with Amazon Transcribe and text to speech with Amazon Polly. Later, you’ll discover the use of Amazon Comprehend for extracting information from text, and Amazon Lex for building voice chatbots. You will also understand the key capabilities of Amazon SageMaker such as wrangling big data, discovering topics in text collections, and classifying images. Finally, you’ll cover sales forecasting with deep learning and autoregression, before exploring the importance of a feedback loop in machine learning. By the end of this book, you will have the skills you need to implement AI in AWS through hands-on exercises that cover all aspects of the ML model life cycle.
Table of Contents (19 chapters)

Section 1: Introduction and Anatomy of a Modern AI Application
Section 2: Building Applications with AWS AI Services
Section 3: Training Machine Learning Models with Amazon SageMaker
Section 4: Machine Learning Model Monitoring and Governance

Deploying the trained NTM model and running the inference

In this section, we will deploy the NTM model, run the inference, and interpret the results. Let's get started:

  1. First, we deploy the trained NTM model as an endpoint, as follows:
ntm_predctr = ntm_estmtr.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')

In the preceding code, we call the deploy() method of the SageMaker Estimator object, ntm_estmtr, to create an endpoint. We pass the number and type of instances required to deploy the model. The NTM Docker image is used to create the endpoint. SageMaker takes a few minutes to deploy the model. The following screenshot shows the endpoint that was provisioned:

You can see the endpoint you've created in the SageMaker console: in the left navigation pane, expand the Inference section and click on Endpoints.
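Once the endpoint is in service, it can be invoked directly from the notebook through the predictor object. The following is only a minimal sketch of what such a call can look like, not the book's own inference code: the vocab_size value and the test_vectors array are hypothetical placeholders, and the CSVSerializer/JSONDeserializer classes from version 2 of the SageMaker Python SDK are assumed:

# Minimal sketch (see the assumptions above): invoke the deployed NTM
# endpoint with a few bag-of-words document vectors and print the
# inferred topic weights for each document.
import numpy as np
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer

# Send CSV-formatted rows to the endpoint and parse the JSON response
ntm_predctr.serializer = CSVSerializer()
ntm_predctr.deserializer = JSONDeserializer()

vocab_size = 2000  # hypothetical vocabulary size; must match the training data
test_vectors = np.zeros((5, vocab_size), dtype='float32')  # placeholder documents

response = ntm_predctr.predict(test_vectors)
for prediction in response['predictions']:
    print(prediction['topic_weights'])  # one topic-mixture vector per document

Each topic_weights vector gives the relative strength of every trained topic in the corresponding document, which is what we will interpret in the remainder of this section.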

...