Applied Machine Learning and High-Performance Computing on AWS

By: Mani Khanuja, Farooq Sabir, Shreyas Subramanian, Trenton Potgieter

Overview of this book

Machine learning (ML) and high-performance computing (HPC) on AWS run compute-intensive workloads across industries and emerging applications. Their use cases span verticals such as computational fluid dynamics (CFD), genomics, and autonomous vehicles. This book provides end-to-end guidance, starting with HPC concepts for storage and networking. It then progresses to working examples of how to process large datasets using SageMaker Studio and EMR. Next, you’ll learn how to build, train, and deploy large models using distributed training. Later chapters also guide you through deploying models to edge devices using SageMaker and IoT Greengrass, and through performance optimization of ML models for low-latency use cases. By the end of this book, you’ll be able to build, train, and deploy your own large-scale ML application using HPC on AWS, following industry best practices and addressing the key pain points encountered in the application life cycle.
Table of Contents (20 chapters)

Part 1: Introducing High-Performance Computing
Part 2: Applied Modeling
Part 3: Driving Innovation Across Industries

Performance Optimization for Real-Time Inference

Machine Learning (ML) and Deep Learning (DL) models are used in almost every industry, including e-commerce, manufacturing, life sciences, and finance. As a result, there has been meaningful innovation to improve the performance of these models. Since the introduction of transformer-based models in 2017, which were initially developed for Natural Language Processing (NLP) applications, both the models and the datasets required to train them have grown exponentially in size. Transformer-based models are now used for forecasting and computer vision applications, in addition to NLP.

Let’s travel back in time a little to understand the growth in size of these models. Embeddings from Language Models (ELMo), introduced in 2018, had 93.6 million parameters, while the third-generation Generative Pre-trained Transformer (GPT-3), released in 2020, had 175 billion parameters. Today, we have DL models such as Switch Transformers...
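
To make these parameter counts concrete, the following minimal Python sketch counts the parameters of a pretrained transformer model. It assumes the Hugging Face transformers library with a PyTorch backend is installed, and uses bert-base-uncased purely as an illustrative checkpoint; it is not one of the models referenced above.

```python
# Minimal sketch: count the parameters of a pretrained transformer model.
# Assumes the Hugging Face `transformers` library with a PyTorch backend.
from transformers import AutoModel

# `bert-base-uncased` is used purely for illustration (roughly 110M parameters).
model = AutoModel.from_pretrained("bert-base-uncased")

num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")
```

Swapping in a larger checkpoint shows how quickly parameter counts climb toward the billions discussed above, which is what makes real-time inference with these models a performance challenge.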