Deep Learning with PyTorch Lightning
By: Kunal Sawarkar

Overview of this book

Building and implementing deep learning (DL) models is becoming a key skill for those who want to be at the forefront of progress. But with so much information and so many complex study materials out there, getting started with DL can feel overwhelming. Written by an AI thought leader, Deep Learning with PyTorch Lightning helps researchers build their first DL models quickly and easily without getting stuck on the complexities. With its help, you’ll be able to maximize productivity for DL projects while ensuring full flexibility, from model formulation to implementation. Throughout this book, you’ll learn how to configure PyTorch Lightning on a cloud platform, understand its architectural components, and explore how they are configured to build various industry solutions. You’ll build a neural network architecture, deploy an application from scratch, and see how you can expand it based on your specific needs, beyond what the framework can provide. In the later chapters, you’ll also learn how to build and train a variety of models, such as convolutional neural networks (CNNs), natural language processing (NLP) models, time series models, self-supervised learning, semi-supervised learning, and generative adversarial networks (GANs), using PyTorch Lightning. By the end of this book, you’ll be able to build and deploy DL models with confidence.
Table of Contents (15 chapters)
Section 1: Kickstarting with PyTorch Lightning
Section 2: Solving using PyTorch Lightning
Section 3: Advanced Topics

GAN training challenges

A GAN model requires a lot of compute resources to train well, especially when the dataset is not very clean and the representations in the images are not easy to learn. To get very clean output with sharp representations in the generated fake images, we need to pass higher-resolution images as input to our GAN model. However, higher resolution means the model needs many more parameters, which in turn requires much more memory to train.
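
To see why the parameter count grows so quickly with resolution, the following minimal sketch builds a standard DCGAN-style generator (an illustrative assumption, not necessarily the exact architecture used in this chapter). Each doubling of the output image size adds one more upsampling block and starts the projection from the latent vector with twice as many feature maps:

import math
import torch.nn as nn

def dcgan_generator(image_size: int, nz: int = 100, ngf: int = 64, nc: int = 3) -> nn.Sequential:
    # Sketch of a DCGAN-style generator; hypothetical, for illustration only.
    # Number of 2x upsampling blocks needed to grow a 4x4 feature map to image_size.
    n_ups = int(math.log2(image_size)) - 2   # 64 pixels -> 4 blocks, 128 pixels -> 5 blocks
    ch = ngf * 2 ** (n_ups - 1)              # larger outputs start with wider feature maps
    layers = [
        nn.ConvTranspose2d(nz, ch, 4, 1, 0, bias=False),  # latent vector -> 4x4 feature map
        nn.BatchNorm2d(ch),
        nn.ReLU(inplace=True),
    ]
    for _ in range(n_ups - 1):               # intermediate 2x upsampling blocks
        layers += [
            nn.ConvTranspose2d(ch, ch // 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch // 2),
            nn.ReLU(inplace=True),
        ]
        ch //= 2
    layers += [nn.ConvTranspose2d(ch, nc, 4, 2, 1, bias=False), nn.Tanh()]  # final block to RGB
    return nn.Sequential(*layers)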

Here is an example scenario. We have trained our models using an image size of 64 pixels, but if we increase the image size to 128 pixels, the number of parameters in the GAN model increases drastically, from 15.9 M to 93.4 M. This, in turn, requires much more compute power to train the model, and with the limited resources of the Google Colab environment, you might get an error similar to this after 20–25 epochs:

RuntimeError: CUDA out of...
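
One quick way to confirm how the parameter count grows for your own model is to count the trainable parameters directly. The snippet below assumes the hypothetical dcgan_generator sketch shown earlier; its printed figures describe that sketch only, not the 15.9 M and 93.4 M reported above for the full GAN (generator plus discriminator):

def count_parameters(model: nn.Module) -> float:
    # Total number of trainable parameters, in millions.
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

print(f"64-pixel generator:  {count_parameters(dcgan_generator(64)):.1f} M parameters")
print(f"128-pixel generator: {count_parameters(dcgan_generator(128)):.1f} M parameters")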