Mastering PyTorch - Second Edition

By: Ashish Ranjan Jha

Overview of this book

PyTorch is making it easier than ever before for anyone to build deep learning applications. This PyTorch deep learning book will help you uncover expert techniques to get the most out of your data and build complex neural network models. You'll build convolutional neural networks for image classification, as well as recurrent neural networks and transformers for sentiment analysis. As you advance, you'll apply deep learning across different domains, such as music, text, and image generation, using generative models, including diffusion models. You'll not only build and train your own deep reinforcement learning models in PyTorch but also learn to optimize model training using multiple CPUs, GPUs, and mixed-precision training. You'll deploy PyTorch models to production, including on mobile devices. Finally, you'll discover the PyTorch ecosystem and its rich set of libraries, which will add another set of tools to your deep learning toolbelt: you'll learn how to use fastai to prototype models and PyTorch Lightning to train them, discover libraries for AutoML and explainable AI (XAI), create recommendation systems, and build language and vision transformers with Hugging Face. By the end of this book, you'll be able to perform complex deep learning tasks using PyTorch to build smart artificial intelligence models.

Understanding text-to-image generation using diffusion

Recall Figure 10.8, where we demonstrated the training process of the UNet model for generating images using diffusion. There, we trained the UNet model to predict the noise present in a noisy input image. To enable text-to-image generation, we need to add text as an additional input to this UNet model, as shown in Figure 10.18 (in contrast to Figure 10.8):

Figure 10.18: UNet trained on both an input (noisy) image as well as text to predict the noise within the noisy image
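To make this concrete, here is a minimal sketch of such a training step. It is an illustration only, using Hugging Face's diffusers library (UNet2DConditionModel and DDPMScheduler) rather than the book's own code; the tensor shapes, the cross-attention width of 512, and the random placeholder data are all assumptions:

```python
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DConditionModel

# A UNet that accepts text embeddings via cross-attention.
# cross_attention_dim must match the text encoder's hidden size
# (512 here is an assumption, matching the encoder sketch below).
unet = UNet2DConditionModel(
    sample_size=64, in_channels=3, out_channels=3, cross_attention_dim=512
)
scheduler = DDPMScheduler(num_train_timesteps=1000)

images = torch.randn(4, 3, 64, 64)   # placeholder batch of training images
text_emb = torch.randn(4, 77, 512)   # placeholder encoded captions (see below)

# Diffusion training step: add noise at a random timestep, then ask
# the UNet to predict that noise, conditioned on the text embeddings.
noise = torch.randn_like(images)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (images.shape[0],))
noisy_images = scheduler.add_noise(images, noise, timesteps)

noise_pred = unet(noisy_images, timesteps, encoder_hidden_states=text_emb).sample
loss = F.mse_loss(noise_pred, noise)  # minimize error in the predicted noise
```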

Such a UNet model is called a conditional UNet model [11] or, more precisely, a text-conditional UNet model, as it generates an image conditioned on the input text. So, how do we train such a model?

The answer has two parts. First, we need to encode the input text into an embedding vector that can be ingested by the UNet model. Second, we need to modify the UNet model slightly to accommodate the extra incoming data (besides...
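For the first of these parts, one widely used option (an assumption here; the book may use a different encoder) is a pretrained CLIP text encoder from the Hugging Face transformers library, which turns a caption into a sequence of per-token embedding vectors:

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Pretrained CLIP text encoder; this checkpoint has a hidden size of 512.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

captions = ["a watercolor painting of a fox in the snow"]
tokens = tokenizer(
    captions, padding="max_length", max_length=tokenizer.model_max_length,
    truncation=True, return_tensors="pt",
)

with torch.no_grad():
    # One embedding per token, shape (batch, 77, 512); this tensor is what
    # gets passed to the conditional UNet as encoder_hidden_states.
    text_emb = text_encoder(input_ids=tokens.input_ids).last_hidden_state
```

In Stable Diffusion-style architectures, these per-token embeddings are fed into the UNet through cross-attention layers, which is one common way to realize the slight modification to the UNet mentioned above.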