
Boundary-seeking GAN


We follow the objective function from the original GAN paper:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Where:

  • x: the data
  • p_g: the generator's distribution over the data x
  • p_z(z): a prior on the input noise variable z
  • G(z; θ_g): a mapping from the noise prior to data space
  • G: a differentiable function represented by a multilayer perceptron with parameters θ_g
  • D(x; θ_d): a second multilayer perceptron that outputs a single scalar
  • D(x): the probability that x came from the data rather than from p_g
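As a point of reference, here is a minimal Keras sketch of the two multilayer perceptrons described in this list; the latent size, layer widths, and data dimensionality are illustrative assumptions rather than values from the text:

from keras.models import Sequential
from keras.layers import Dense

latent_dim = 100   # dimensionality of the noise z ~ p_z(z) (assumed)
data_dim = 784     # dimensionality of the data x, e.g. flattened 28x28 images (assumed)

# G(z; θ_g): a multilayer perceptron mapping noise into data space
generator = Sequential([
    Dense(128, activation='relu', input_dim=latent_dim),
    Dense(data_dim, activation='tanh'),
])

# D(x; θ_d): a second multilayer perceptron emitting a single scalar,
# the probability that its input came from the data rather than from p_g
discriminator = Sequential([
    Dense(128, activation='relu', input_dim=data_dim),
    Dense(1, activation='sigmoid'),
])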

The objective is to train D to maximize the probability of assigning the correct label to both training examples and samples from G, while simultaneously training G to minimize log(1 - D(G(z))). For a fixed generator, the optimal discriminator D*_G(x) is given by:

$$D^*_G(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)}$$

where p_data(x) is the real data distribution, which can be recovered by rearranging the terms of the preceding equation:

$$p_{\mathrm{data}}(x) = p_g(x)\,\frac{D^*_G(x)}{1 - D^*_G(x)}$$
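The first equation comes from a pointwise maximization: for fixed values a = p_data(x) and b = p_g(x), the function a log y + b log(1 - y) attains its maximum at y = a/(a + b). A quick numerical check of this claim (my own illustration; the constants are arbitrary):

import numpy as np

# p_data(x) and p_g(x) at some fixed point x (arbitrary example values)
a, b = 0.7, 0.3

# Scan candidate discriminator outputs y in (0, 1) and evaluate
# the pointwise value a*log(y) + b*log(1 - y)
y = np.linspace(1e-6, 1 - 1e-6, 100000)
value = a * np.log(y) + b * np.log(1 - y)

best_y = y[np.argmax(value)]
print(best_y, a / (a + b))   # both are ~0.7, i.e. p_data / (p_data + p_g)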

The assumption is that as we train D(x) more and more, it comes closer and closer to D*_G(x), and our GAN training gets better and better. For the optimal generator, p_g = p_data, so the optimal discriminator outputs:

$$D^*_G(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)} = \frac{1}{2}$$

Notice that 0.5 is exactly the discriminator's decision boundary: when the generator is perfect, the discriminator can do no better than guessing. The boundary-seeking GAN turns this observation into a training signal by pushing the generator to produce samples for which D(G(z)) is as close to 0.5 as possible.
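As a concrete illustration, here is a minimal sketch of that objective written as a custom Keras loss, following the continuous boundary-seeking formulation of Hjelm et al.; the function name is my own, and y_pred stands for the discriminator's output on generated samples:

from keras import backend as K

# Boundary-seeking generator loss (a sketch, not the book's code):
# it is minimized when D(G(z)) = 0.5, i.e. on the decision boundary.
def boundary_seeking_loss(y_true, y_pred):
    # Clip to avoid log(0); y_true is ignored, as is common for GAN losses.
    eps = K.epsilon()
    y_pred = K.clip(y_pred, eps, 1 - eps)
    return 0.5 * K.mean(K.square(K.log(y_pred) - K.log(1 - y_pred)))

Compiling the stacked generator-discriminator model with this loss in place of the usual binary cross-entropy, for example combined.compile(optimizer='adam', loss=boundary_seeking_loss) where combined is a hypothetical name for that stacked model, is the only change needed relative to a vanilla GAN.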