Accelerate Model Training with PyTorch 2.X

By: Maicon Melo Alves
Overview of this book

Penned by an expert in High-Performance Computing (HPC) with over 25 years of experience, this book is your guide to enhancing the performance of model training using PyTorch, one of the most widely adopted machine learning frameworks. You’ll start by understanding how model complexity impacts training time before discovering distinct levels of performance tuning to expedite the training process. You’ll also learn how to use a new PyTorch feature to compile the model and train it faster, alongside learning how to benefit from specialized libraries to optimize the training process on the CPU. As you progress, you’ll gain insights into building an efficient data pipeline to keep accelerators occupied during the entire training execution and explore strategies for reducing model complexity and adopting mixed precision to minimize computing time and memory consumption. The book will get you acquainted with distributed training and show you how to use PyTorch to harness the computing power of multicore systems and multi-GPU environments available on single or multiple machines. By the end of this book, you’ll be equipped with a suite of techniques, approaches, and strategies to speed up training, so you can focus on what really matters—building stunning models!
Table of Contents (17 chapters)
Part 1: Paving the Way
Part 2: Going Faster
Part 3: Going Distributed

Summary

In this chapter, you learned that adopting a mixed-precision approach can accelerate the training process of your models.

Although it is possible to implement the mixed-precision strategy by hand, it is preferable to rely on the Automatic Mixed Precision (AMP) solution provided by PyTorch, since it is an elegant and seamless mechanism designed to avoid errors involving numeric representation. When this kind of error does occur, it is very hard to identify and solve.

Implementing AMP in PyTorch requires adding only a few extra lines to the original code. Essentially, we must wrap the training loop with the AMP engine, enable four flags related to backend libraries, and instantiate a gradient scaler.

Depending on the GPU architecture, the library versions, and the model itself, AMP can significantly reduce the training time and memory consumption of the training process.

This chapter closes the second part of this book. Next, in the third and last part, we will learn how to spread the training process...