Hands-On Generative Adversarial Networks with Keras

By: Rafael Valle

Overview of this book

Generative Adversarial Networks (GANs) have revolutionized the fields of machine learning and deep learning. This book will be your first step toward understanding GAN architectures and tackling the challenges involved in training them. It opens with an introduction to deep learning and generative models and their applications in artificial intelligence (AI). You will then learn how to build, evaluate, and improve your first GAN with the help of easy-to-follow examples. The next few chapters will guide you through training a GAN model to produce and improve high-resolution images. You will also learn how to implement conditional GANs that let you control the characteristics of GAN output. You will build on your knowledge further by exploring a new training methodology for progressive growing of GANs. Moving on, you'll gain insights into state-of-the-art models in image synthesis, speech enhancement, and natural language generation using GANs. In addition to this, you'll learn how to identify GAN samples with TequilaGAN. By the end of this book, you will have worked through various examples and datasets, be well versed in the latest advancements in the GAN framework, and have developed the skills you need to implement GAN architectures for several tasks and domains, including computer vision, natural language processing (NLP), and audio processing.

Foreword by Ting-Chun Wang, Senior Research Scientist, NVIDIA
Table of Contents (14 chapters)

Section 1: Introduction and Environment Setup
Section 2: Training GANs
Section 3: Application of GANs in Computer Vision, Natural Language Processing, and Audio

Summary

In this chapter, we investigated the numerical properties of samples produced with adversarial methods, especially Generative Adversarial Networks. We showed that fake samples have properties that are barely noticeable by visual inspection: because of stochastic gradient descent and the requirement of differentiability, fake samples smoothly approximate the dominant modes of the data distribution. We also analyzed statistical measures of divergence between real and fake data, and the results showed that even for simple statistics, such as the distribution of pixel intensities, the divergence between the training data and the fake data is large relative to the divergence between the training data and the test data.
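To make this concrete, here is a minimal sketch of the kind of divergence measurement described above: it compares the empirical distributions of pixel intensities in a real batch and a generated batch using a two-sample Kolmogorov-Smirnov test. This is not the book's code; the arrays real_images and fake_images are synthetic stand-ins for batches of real and generated images scaled to [0, 1].

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins for batches of real and generated images in [0, 1].
rng = np.random.default_rng(0)
real_images = rng.uniform(0.0, 1.0, size=(64, 28, 28))
fake_images = np.clip(rng.normal(0.5, 0.1, size=(64, 28, 28)), 0.0, 1.0)

# Flatten each batch into a one-dimensional sample of pixel intensities.
real_pixels = real_images.ravel()
fake_pixels = fake_images.ravel()

# The KS statistic is the maximum distance between the two empirical CDFs;
# a larger value indicates a larger divergence between the distributions.
statistic, p_value = ks_2samp(real_pixels, fake_pixels)
print(f"KS statistic: {statistic:.4f}, p-value: {p_value:.3g}")
```

In practice, you would replace the stand-in arrays with pixel intensities from the training set, the test set, and the Generator's output, and compare the real-versus-fake statistic against the train-versus-test statistic as a baseline.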

Although it is not common practice, one could possibly circumvent the difference in support between the real and fake data by training Generators that explicitly sample a distribution that...