Hands-On Artificial Intelligence on Amazon Web Services

By: Subhashini Tripuraneni, Charles Song
Overview of this book

From data wrangling through to translating text, you can accomplish this and more with the artificial intelligence and machine learning services available on AWS. With this book, you’ll work through hands-on exercises and learn to use these services to solve real-world problems. You’ll even design, develop, monitor, and maintain machine and deep learning models on AWS. The book starts with an introduction to AI and its applications in different industries, along with an overview of AWS artificial intelligence and machine learning services. You’ll then get to grips with detecting and translating text with Amazon Rekognition and Amazon Translate. The book will assist you in performing speech-to-text with Amazon Transcribe and text-to-speech with Amazon Polly. Later, you’ll discover the use of Amazon Comprehend for extracting information from text, and Amazon Lex for building voice chatbots. You will also understand the key capabilities of Amazon SageMaker such as wrangling big data, discovering topics in text collections, and classifying images. Finally, you’ll cover sales forecasting with deep learning and autoregression, before exploring the importance of a feedback loop in machine learning. By the end of this book, you will have the skills you need to implement AI in AWS through hands-on exercises that cover all aspects of the ML model life cycle.
Table of Contents (19 chapters)

Section 1: Introduction and Anatomy of a Modern AI Application
Section 2: Building Applications with AWS AI Services
Section 3: Training Machine Learning Models with Amazon SageMaker
Section 4: Machine Learning Model Monitoring and Governance

Running hyperparameter optimization (HPO)

Arriving at the optimal set of hyperparameters for the best model performance typically takes data scientists many hours and many experiments. The process is largely one of trial and error.
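Since this section is about running HPO on AWS, the following is a minimal sketch of how an automatic tuning job might be launched with the SageMaker Python SDK's HyperparameterTuner. The estimator, objective metric name, parameter ranges, and input channels are illustrative assumptions rather than the book's exact code:

```python
# A minimal sketch of launching an automatic model tuning (HPO) job with the
# SageMaker Python SDK. The metric name and ranges below are placeholders.
from sagemaker.tuner import (
    HyperparameterTuner,
    ContinuousParameter,
    IntegerParameter,
)

# Assume `estimator` is an already-configured sagemaker.estimator.Estimator
# (for example, the built-in XGBoost image) with role, instance type, and
# output path set earlier in the chapter's workflow.
hyperparameter_ranges = {
    "eta": ContinuousParameter(0.01, 0.3),   # learning rate range to explore
    "max_depth": IntegerParameter(3, 10),    # tree depth range to explore
}

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",  # metric emitted by the training job
    hyperparameter_ranges=hyperparameter_ranges,
    objective_type="Maximize",
    max_jobs=20,                             # total training jobs to launch
    max_parallel_jobs=2,                     # jobs run concurrently
)

# Kick off the tuning job; `train_input` and `validation_input` are assumed
# to be S3 input channels prepared earlier.
tuner.fit({"train": train_input, "validation": validation_input})
```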

Although GridSearch is one of the techniques traditionally used by data scientists, it suffers from the curse of dimensionality. For example, if we have two hyperparameters, each taking five possible values, we need to compute the objective function 25 times (5 x 5). As the number of hyperparameters grows, the number of objective function evaluations explodes combinatorially.
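To make the 5 x 5 arithmetic concrete, here is a small, self-contained Python sketch (with a made-up toy objective, not code from the book) that enumerates the full grid and confirms it contains 25 combinations:

```python
# Illustration of why grid search scales poorly: two hyperparameters with
# five candidate values each already require 25 objective evaluations.
from itertools import product

learning_rates = [0.001, 0.01, 0.05, 0.1, 0.3]
max_depths = [3, 5, 7, 9, 11]

def toy_objective(lr, depth):
    # Stand-in for an expensive training-and-validation run.
    return -((lr - 0.05) ** 2) - 0.01 * ((depth - 7) ** 2)

grid = list(product(learning_rates, max_depths))
print(len(grid))  # 25 = 5 x 5 combinations, each requiring a full evaluation

best = max(grid, key=lambda params: toy_objective(*params))
print(best)  # the grid point with the highest toy objective
```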

Random Search addresses this issue by randomly sampling hyperparameter values instead of exhaustively evaluating every possible combination. This paper by Bergstra et al. claims that a random search of the...
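As a counterpart to the grid-search snippet above, here is a minimal random-search sketch (again with a made-up toy objective and an assumed budget of 10 trials) that samples hyperparameter values rather than enumerating every combination:

```python
# Minimal random-search sketch: evaluate a fixed budget of randomly sampled
# hyperparameter combinations instead of the full grid.
import random

random.seed(42)

def toy_objective(lr, depth):
    # Stand-in for an expensive training-and-validation run.
    return -((lr - 0.05) ** 2) - 0.01 * ((depth - 7) ** 2)

budget = 10  # evaluate only 10 sampled combinations instead of all 25
samples = [
    (random.uniform(0.001, 0.3), random.randint(3, 11))
    for _ in range(budget)
]

best = max(samples, key=lambda params: toy_objective(*params))
print(best)  # best sampled combination under the evaluation budget
```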