Accelerate Model Training with PyTorch 2.X

By: Maicon Melo Alves

Overview of this book

Penned by an expert in High-Performance Computing (HPC) with over 25 years of experience, this book is your guide to enhancing the performance of model training using PyTorch, one of the most widely adopted machine learning frameworks. You’ll start by understanding how model complexity impacts training time before discovering distinct levels of performance tuning to expedite the training process. You’ll also learn how to use a new PyTorch feature to compile the model and train it faster, and how to benefit from specialized libraries to optimize the training process on the CPU. As you progress, you’ll gain insights into building an efficient data pipeline to keep accelerators occupied during the entire training execution, and explore strategies for reducing model complexity and adopting mixed precision to minimize computing time and memory consumption. The book will get you acquainted with distributed training and show you how to use PyTorch to harness the computing power of multicore systems and multi-GPU environments, available on single or multiple machines. By the end of this book, you’ll be equipped with a suite of techniques, approaches, and strategies to speed up training, so you can focus on what really matters: building stunning models!
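The "new PyTorch feature to compile the model" mentioned above is `torch.compile`, introduced in PyTorch 2.x. As a minimal sketch (the toy model and tensor sizes here are made up for illustration), enabling it is a one-line change around an ordinary `nn.Module`:

```python
# Minimal sketch, assuming PyTorch 2.x is installed.
# torch.compile wraps a model so subsequent forward passes run through
# a captured, optimized graph instead of plain eager execution.
import torch
from torch import nn

# A toy model for illustration only.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# backend="eager" only exercises graph capture (useful for a quick check);
# the default inductor backend is what delivers the actual speed-ups.
compiled_model = torch.compile(model, backend="eager")

x = torch.randn(4, 8)
y = compiled_model(x)  # the first call triggers graph capture
# y.shape == (4, 1), same as the uncompiled model
```

The compiled model is a drop-in replacement: the training loop, loss function, and optimizer code stay exactly the same.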
Table of Contents (17 chapters)
Part 1: Paving the Way
Part 2: Going Faster
Part 3: Going Distributed

Quiz time!

Let’s review what we have learned in this chapter by answering a few questions. First, try to answer these questions without consulting the material.

Note

The answers to all these questions are available at https://github.com/PacktPublishing/Accelerate-Model-Training-with-PyTorch-2.X/blob/main/quiz/chapter08-answers.md.

Before starting the quiz, remember that this is not a test! This section aims to complement your learning process by revising and consolidating the content covered in this chapter.

Choose the correct option for the following questions.

  1. What are the two main reasons for distributing the training process?
    1. Reliability and performance improvement.
    2. Lack of memory and power consumption.
    3. Power consumption and performance improvement.
    4. Lack of memory and performance improvement.
  2. What are the two main parallel strategies for distributing the training process?
    1. Model and data parallelism.
    2. Model and hardware parallelism.
    3. Hardware and data parallelism...