
Mastering PyTorch

By: Ashish Ranjan Jha

Overview of this book

Deep learning is driving the AI revolution, and PyTorch is making it easier than ever before for anyone to build deep learning applications. This PyTorch book will help you uncover expert techniques to get the most out of your data and build complex neural network models. The book starts with a quick overview of PyTorch and explores using convolutional neural network (CNN) architectures for image classification. You'll then work with recurrent neural network (RNN) architectures and transformers for sentiment analysis. As you advance, you'll apply deep learning across different domains, such as music, text, and image generation using generative models and explore the world of generative adversarial networks (GANs). You'll not only build and train your own deep reinforcement learning models in PyTorch but also deploy PyTorch models to production using expert tips and techniques. Finally, you'll get to grips with training large models efficiently in a distributed manner, searching neural architectures effectively with AutoML, and rapidly prototyping models using PyTorch and fast.ai. By the end of this PyTorch book, you'll be able to perform complex deep learning tasks using PyTorch to build smart artificial intelligence models.
Table of Contents (20 chapters)

Section 1: PyTorch Overview
Section 2: Working with Advanced Neural Network Architectures
Section 3: Generative Models and Deep Reinforcement Learning
Section 4: PyTorch in Production Systems

Discussing GRUs and attention-based models

In the final section of this chapter, we will briefly look at GRUs, how they are similar to yet different from LSTMs, and how to initialize a GRU model using PyTorch. We will also look at attention-based RNNs. We will conclude this section by describing how attention-only models (with no recurrence or convolutions) outperform the recurrent family of neural models when it comes to sequence modeling tasks.

GRUs and PyTorch

As we discussed in the Exploring the evolution of recurrent networks section, GRUs are a type of memory cell with two gates – a reset gate and an update gate – as well as one hidden state vector. In terms of configuration, GRUs are simpler than LSTMs, yet equally effective in dealing with the exploding and vanishing gradients problem. Extensive research has been done to compare the performance of LSTMs and GRUs. While both perform better than simple RNNs on various sequence-related tasks, one is slightly better...
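As a minimal sketch of initializing a GRU in PyTorch, the following uses the built-in `torch.nn.GRU` module. The hyperparameter values (input size, hidden size, number of layers) are illustrative assumptions, not values from the book:

```python
import torch
import torch.nn as nn

# Illustrative hyperparameters (assumed, not from the book):
# each timestep has 32 input features, the hidden state has 64 units,
# and two GRU layers are stacked.
gru = nn.GRU(input_size=32, hidden_size=64, num_layers=2, batch_first=True)

# Dummy input: a batch of 8 sequences, each 10 timesteps long,
# with 32 features per timestep (batch_first=True -> batch, seq, feature).
x = torch.randn(8, 10, 32)

# output holds the top layer's hidden state at every timestep;
# h_n holds the final hidden state of each layer.
output, h_n = gru(x)

print(output.shape)  # torch.Size([8, 10, 64])
print(h_n.shape)     # torch.Size([2, 8, 64])
```

Note that, unlike `nn.LSTM`, `nn.GRU` returns only a hidden state tensor `h_n` and no separate cell state – a direct consequence of the GRU's simpler gating structure described above.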