Hands-On Generative Adversarial Networks with PyTorch 1.x

By: John Hany, Greg Walters

Overview of this book

With continuously evolving research and development, Generative Adversarial Networks (GANs) are the next big thing in the field of deep learning. This book highlights the key improvements of GANs over other generative models and guides you in making the most of GANs with the help of hands-on examples. The book starts by taking you through the core concepts necessary to understand how each component of a GAN model works. You'll build your first GAN model to understand how the generator and discriminator networks function. As you advance, you'll delve into a range of examples and datasets to build a variety of GAN models using PyTorch functionalities, and become well-versed in architectures, training strategies, and evaluation methods for image generation, translation, and restoration. You'll also learn how to apply GAN models to solve problems in areas such as computer vision, multimedia, 3D models, and natural language processing (NLP). The book covers how to overcome the challenges faced while building generative models from scratch. Finally, you'll discover how to train your GAN models to generate adversarial examples that attack other CNN and GAN models. By the end of this book, you will have learned how to build, train, and optimize next-generation GAN models and use them to solve a variety of real-world problems.

CycleGAN – image-to-image translation from unpaired collections

You may have noticed that, when training pix2pix, we need to specify a direction (AtoB or BtoA) in which the images are translated. Does this mean that, if we want to translate freely from image set A to image set B and vice versa, we need to train two separate models? Not with CycleGAN, we say!

CycleGAN was proposed by Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros in their paper, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. It is a bidirectional generative model based on unpaired image collections. The core idea of CycleGAN is built on the assumption of cycle consistency: if we have two generative models, G and F, that translate between two sets of images, X and Y, such that Y=G(X) and X=F(Y), we can naturally assume that F(G(X)) should be very close to the original X (and, likewise, that G(F(Y)) should be very close to Y).
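To make this concrete, here is a minimal PyTorch sketch of the cycle-consistency term. The names G, F, and lambda_cyc are illustrative placeholders for any two generator networks with matching input/output shapes and the loss weight; the paper weights this term with a hyperparameter λ (set to 10 in its experiments):

```python
import torch.nn as nn

# Minimal sketch of the cycle-consistency loss, assuming G translates
# X -> Y and F translates Y -> X (both hypothetical generator networks).
l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, real_x, real_y, lambda_cyc=10.0):
    """Penalize F(G(x)) for drifting away from x, and G(F(y)) from y."""
    forward_cycle = l1(F(G(real_x)), real_x)    # x -> G(x) -> F(G(x)) ~ x
    backward_cycle = l1(G(F(real_y)), real_y)   # y -> F(y) -> G(F(y)) ~ y
    return lambda_cyc * (forward_cycle + backward_cycle)
```

In the full CycleGAN objective, this term is added to the adversarial losses for both G and F, which is what lets a single training run learn both translation directions from unpaired collections.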