Intelligent Workloads at the Edge

By: Indraneel Mitra, Ryan Burke

Overview of this book

The Internet of Things (IoT) has transformed how people think about and interact with the world. The ubiquitous deployment of sensors around us makes it possible to study the world in fine detail and to make data-driven decisions anywhere. Data analytics and machine learning (ML), powered by elastic cloud computing, have accelerated our ability to understand and analyze the huge volume of data generated by IoT. Now, edge computing has brought information technologies closer to the data source to lower latency and reduce costs.

This book will teach you how to combine edge computing, data analytics, and ML to deliver next-generation cyber-physical outcomes. You’ll begin by discovering how to create software applications that run on edge devices with AWS IoT Greengrass. As you advance, you’ll learn how to process and stream IoT data from the edge to the cloud and use it to train ML models with Amazon SageMaker. The book also shows you how to run these trained models at the edge for optimized performance, cost savings, and data compliance. By the end of this IoT book, you’ll be able to scope your own IoT workloads, bring the power of ML to the edge, and operate those workloads in a production setting.
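To make that workflow concrete, here is a minimal sketch (not taken from the book) of the kind of edge application it builds toward: a Python entry point for an AWS IoT Greengrass v2 component that runs a placeholder inference routine and publishes results to AWS IoT Core over the component IPC interface. The topic name, interval, and inference logic are illustrative assumptions, and a real component would also need an IPC access-control policy in its recipe to allow the publish.

import json
import time

from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2
from awsiot.greengrasscoreipc.model import QOS


def run_inference(reading: dict) -> dict:
    # Placeholder for a real model call (for example, a model trained in
    # Amazon SageMaker and packaged as a component artifact).
    return {"reading": reading, "anomaly_score": 0.5}


def main() -> None:
    # The V2 IPC client connects through the socket that Greengrass
    # provides to every component at runtime.
    ipc = GreengrassCoreIPCClientV2()
    while True:
        result = run_inference({"sensor": "temperature", "value": 21.7})
        ipc.publish_to_iot_core(
            topic_name="edge/inference/results",  # illustrative topic
            qos=QOS.AT_LEAST_ONCE,
            payload=json.dumps(result).encode(),
        )
        time.sleep(60)


if __name__ == "__main__":
    main()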
Table of Contents (17 chapters)

Section 1: Introduction and Prerequisites
Section 2: Building Blocks
Section 3: Scaling It Up
Section 4: Bring It All Together

Summary

This chapter taught you the key differences involved in deploying components remotely to your Greengrass devices, how to accelerate your solution development using the managed components provided by AWS, and how to get your first ML workload running on your prototype hub device. At this point, you have all the basics you need to start writing your own components and designing edge solutions. You can even extend the managed ML components to get started with some basic computer vision (CV) projects. If your business already uses trained ML models and containerized inference code, you can start deploying them to the edge now as custom components.
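As a minimal sketch of that last step, the snippet below uses the boto3 greengrassv2 client to create a cloud-side deployment that pushes a custom component to a single core device. The component name (com.example.MlInference), its version, the thing ARN, and the region are placeholders rather than values from the book; you would substitute the component you registered and the device you provisioned earlier.

import boto3

# Cloud-side client for AWS IoT Greengrass V2; the region is a placeholder.
greengrass = boto3.client("greengrassv2", region_name="us-east-1")

response = greengrass.create_deployment(
    # Placeholder thing ARN for the prototype hub device.
    targetArn="arn:aws:iot:us-east-1:123456789012:thing/hub-device",
    deploymentName="ml-inference-deployment",
    components={
        # Hypothetical custom component holding your model and inference code.
        "com.example.MlInference": {"componentVersion": "1.0.0"},
    },
)
print("Deployment ID:", response["deploymentId"])

Greengrass delivers the deployment to the device as an AWS IoT job, so you can track its status per device in the console or with the CLI.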

In the next chapter, Chapter 5, Ingesting and Streaming Data from the Edge, you will learn how data moves through the edge using prescriptive structures, models, and transformations. Handling data properly at the edge is important for adding efficiency, resilience, and security to your edge ML solutions.