Intelligent Workloads at the Edge

By: Indraneel Mitra, Ryan Burke

Overview of this book

The Internet of Things (IoT) has transformed how people think about and interact with the world. The ubiquitous deployment of sensors around us makes it possible to study the world at any level of accuracy and enable data-driven decision-making anywhere. Data analytics and machine learning (ML) powered by elastic cloud computing have accelerated our ability to understand and analyze the huge amount of data generated by IoT. Now, edge computing has brought information technologies closer to the data source to lower latency and reduce costs. This book will teach you how to combine the technologies of edge computing, data analytics, and ML to deliver next-generation cyber-physical outcomes. You’ll begin by discovering how to create software applications that run on edge devices with AWS IoT Greengrass. As you advance, you’ll learn how to process and stream IoT data from the edge to the cloud and use it to train ML models using Amazon SageMaker. The book also shows you how to train these models and run them at the edge for optimized performance, cost savings, and data compliance. By the end of this IoT book, you’ll be able to scope your own IoT workloads, bring the power of ML to the edge, and operate those workloads in a production setting.
Table of Contents (17 chapters)

  • Section 1: Introduction and Prerequisites (starts at Chapter 1)
  • Section 2: Building Blocks (starts at Chapter 3)
  • Section 3: Scaling It Up (starts at Chapter 10)
  • Section 4: Bring It All Together (starts at Chapter 13)

Hands-on with ML architecture

In this section, you will deploy a solution on a connected HBS hub, which will require you to build and train ML models in the cloud and then deploy them to the edge for inferencing. The following diagram shows the lab architecture, with the steps (1-5) you will complete highlighted:

Figure 7.17 – Hands-on ML architecture

Your objectives include the following, which are highlighted as distinct steps in the preceding architecture:

  • Build the ML workflow using Amazon SageMaker
  • Deploy the ML model from the cloud to the edge using AWS IoT Greengrass (see the deployment sketch after this list)
  • Perform ML inferencing on the edge and visualize the results
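
To make the second objective concrete, here is a minimal sketch of how a Greengrass deployment can be created from the cloud with boto3. The region, account ID, core device ARN, component name, and version are placeholders for illustration; the lab uses its own names and includes the model artifact and inference code as component dependencies.

import boto3

# Greengrass V2 control-plane client in the cloud (placeholder region)
greengrass = boto3.client("greengrassv2", region_name="us-west-2")

response = greengrass.create_deployment(
    # Core device that represents the HBS hub (placeholder ARN)
    targetArn="arn:aws:iot:us-west-2:123456789012:thing/hbs-hub-core",
    deploymentName="hbs-ml-inference",
    components={
        # Hypothetical custom component bundling the trained model and inference code
        "com.hbs.hub.MLInference": {"componentVersion": "1.0.0"},
    },
)
print("Created deployment:", response["deploymentId"])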

The following table shows the list of components you will use during the lab:

Figure 7.18 – Hands-on lab components

Building the ML workflow

In this section, you will build, train, and test the ML model using Amazon SageMaker Studio...
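
As a preview of what that looks like in a Studio notebook, the following is a generic sketch using the SageMaker Python SDK. The algorithm, hyperparameters, S3 prefixes, and instance type are illustrative placeholders and will differ from the lab notebook:

import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = sagemaker.get_execution_role()   # IAM role attached to the Studio user profile
bucket = session.default_bucket()       # placeholder S3 bucket for data and artifacts

# Built-in algorithm container (example only; the lab may use a different algorithm)
container = image_uris.retrieve("image-classification", session.boto_region_name)

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path=f"s3://{bucket}/hbs-hub/models",
    sagemaker_session=session,
)

# Placeholder hyperparameters required by the built-in image classification algorithm
estimator.set_hyperparameters(num_classes=2, num_training_samples=1000, epochs=5)

# Launch the training job against data previously staged in S3 (placeholder prefixes)
estimator.fit({
    "train": f"s3://{bucket}/hbs-hub/train",
    "validation": f"s3://{bucket}/hbs-hub/validation",
})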