Generative Adversarial Networks Cookbook

By Josh Kalin
Overview of this book

Developing Generative Adversarial Networks (GANs) is a complex task, and it is often hard to find code that is easy to understand. This book leads you through eight different examples of modern GAN implementations, including CycleGAN, SimGAN, DCGAN, and 2D image to 3D model generation. Each chapter contains useful recipes that build on a common architecture in Python, TensorFlow, and Keras to explore increasingly difficult GAN architectures in an easy-to-read format. The book starts by covering the different types of GAN architecture to help you understand how each model works. It also contains intuitive recipes for use cases involving DCGAN, Pix2Pix, and so on. To understand these complex applications, you will apply different real-world datasets to them. By the end of this book, you will be equipped to deal with the challenges and issues that you may face while working with GAN models, thanks to easy-to-follow code solutions that you can implement right away.

Pseudocode – how does it work?


With every technique, we need to understand the baseline algorithm before we can lay down any code. So, in this section, we'll discuss how the training algorithm works.

Getting ready

In this section, we'll once again be referring to the SimGAN paper, Learning from Simulated and Unsupervised Images through Adversarial Training (Shrivastava et al., 2017).

How to do it...

In the SimGAN paper, the authors provide a convenient graphic to base an implementation on. We already know that we need to develop models for each of the networks, but how do we train them in the first place? The following diagram offers an explanation:

Figure: The SimGAN training algorithm

Let's convert the preceding diagram into the following tangible steps (a code sketch of the loop follows the list):

  1. Read both synthetic images and real images into variables.
  2. Then, for every epoch, do the following:
    • Train the refiner network on a random mini-batch K_g times
    • Train the discriminator network on a random mini-batch K_d times
  3. Stop when the maximum number of epochs is reached, or when the loss has not changed significantly for n epochs.
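
To make these steps concrete, here is a minimal sketch of the training loop in Keras-style Python. This is not the recipe's final code: refiner, discriminator, and combined are assumed to be already-compiled Keras models (with combined stacking the refiner in front of a frozen discriminator so the adversarial loss reaches the refiner), and sample_batch is a hypothetical helper:

    import numpy as np

    def sample_batch(data, batch_size):
        # Draw a random mini-batch of images from a NumPy array.
        idx = np.random.randint(0, data.shape[0], batch_size)
        return data[idx]

    def train(refiner, discriminator, combined,
              synthetic_images, real_images,
              epochs=10000, batch_size=32, k_g=2, k_d=1):
        # Step 1: the synthetic and real images arrive as NumPy arrays.
        real_label = np.ones((batch_size, 1))   # target for real images
        fake_label = np.zeros((batch_size, 1))  # target for refined images

        for epoch in range(epochs):
            # Step 2a: train the refiner K_g times on random mini-batches.
            # The combined model pushes refined images toward the "real" label.
            for _ in range(k_g):
                synthetic = sample_batch(synthetic_images, batch_size)
                g_loss = combined.train_on_batch(synthetic, real_label)

            # Step 2b: train the discriminator K_d times on random mini-batches
            # of real images and freshly refined synthetic images.
            for _ in range(k_d):
                real = sample_batch(real_images, batch_size)
                refined = refiner.predict(sample_batch(synthetic_images, batch_size))
                d_loss_real = discriminator.train_on_batch(real, real_label)
                d_loss_fake = discriminator.train_on_batch(refined, fake_label)

            # Step 3 (simplified): a full implementation would also stop early
            # once the losses stop changing significantly for n epochs.
            if epoch % 100 == 0:
                print(f"epoch {epoch}: g={g_loss}, d={d_loss_real}/{d_loss_fake}")

Note that the refiner never sees labels directly; its gradients flow through the frozen discriminator inside the combined model, which is what makes the training adversarial. The ratio of K_g to K_d controls how far one network can pull ahead of the other during training.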