Advanced Deep Learning with TensorFlow 2 and Keras - Second Edition

By: Rowel Atienza

Overview of this book

Advanced Deep Learning with TensorFlow 2 and Keras, Second Edition is a completely updated edition of the bestselling guide to the advanced deep learning techniques available today. Revised for TensorFlow 2.x, this edition introduces you to the practical side of deep learning with new chapters on unsupervised learning using mutual information, object detection (SSD), and semantic segmentation (FCN and PSPNet), further allowing you to create your own cutting-edge AI projects. Using Keras as an open-source deep learning library, the book features hands-on projects that show you how to create more effective AI with the most up-to-date techniques. Starting with an overview of multi-layer perceptrons (MLPs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), the book then introduces more cutting-edge techniques as you explore deep neural network architectures, including ResNet and DenseNet, and how to create autoencoders. You will then learn about GANs, and how they can unlock new levels of AI performance. Next, you'll discover how a variational autoencoder (VAE) is implemented, and how GANs and VAEs have the generative power to synthesize data that can be extremely convincing to humans. You'll also learn to implement deep reinforcement learning (DRL) algorithms such as Deep Q-Learning and Policy Gradient Methods, which are critical to many modern results in AI.

2. The Q value

If the RL problem is to find the optimal policy $\pi^*$, how does the agent learn by interacting with the environment? Equation 9.1.3 does not explicitly indicate the action to try and the succeeding state to compute the return. In RL, it is easier to learn by using the Q value:

$\pi^* = \arg\max_{a} Q(s, a)$ (Equation 9.2.1)

where:

$V^*(s) = \max_{a} Q(s, a)$ (Equation 9.2.2)

In other words, instead of finding the policy that maximizes the value for all states, Equation 9.2.1 looks for the action that maximizes the quality (Q) value for all states. After finding the Q value function, $V^*(s)$ and hence $\pi^*$ are determined by Equation 9.2.2 and Equation 9.1.3, respectively.
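
To make Equations 9.2.1 and 9.2.2 concrete, here is a minimal sketch (not from the book) of how a tabular Q function yields both the optimal policy and the optimal value function. The q_table array and its dimensions are hypothetical placeholders:

    import numpy as np

    # Hypothetical tabular Q function: rows are states, columns are actions.
    # In practice these values would be learned (see Equation 9.2.3).
    q_table = np.array([
        [0.0, 1.0, 0.5],   # Q(s=0, a) for actions a = 0, 1, 2
        [0.2, 0.1, 0.9],   # Q(s=1, a)
    ])

    # Equation 9.2.1: the optimal policy picks the highest-Q action per state.
    pi_star = np.argmax(q_table, axis=1)   # pi*(s) = argmax_a Q(s, a)

    # Equation 9.2.2: the optimal state value is the maximum Q value per state.
    v_star = np.max(q_table, axis=1)       # V*(s) = max_a Q(s, a)

    print(pi_star)  # -> [1 2]
    print(v_star)   # -> [1.  0.9]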

If, for every action, the reward and the next state can be observed, we can formulate the following iterative or trial-and-error algorithm to learn the Q value:

$Q(s, a) = r + \gamma \max_{a'} Q(s', a')$ (Equation 9.2.3)

For notational simplicity, $s'$ and $a'$ are the next state and action, respectively. Equation 9.2.3 is known as the Bellman equation, which is the core of the Q-learning algorithm. Q-learning...
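
As a rough illustration of Equation 9.2.3 (a sketch under assumed names, not the book's implementation), a single trial-and-error update of a tabular Q function could look like this; the sizes, q_table, and q_update are hypothetical:

    import numpy as np

    n_states, n_actions = 4, 2     # hypothetical small discrete problem
    gamma = 0.9                    # discount factor
    q_table = np.zeros((n_states, n_actions))

    def q_update(state, action, reward, next_state):
        """One trial-and-error update following Equation 9.2.3:
        Q(s, a) = r + gamma * max_a' Q(s', a')."""
        q_table[state, action] = reward + gamma * np.max(q_table[next_state])

    # Example: after observing the transition (s=0, a=1, r=1.0, s'=2):
    q_update(0, 1, 1.0, 2)

For stochastic environments, a learning rate is commonly blended into this update; the plain assignment above matches Equation 9.2.3 exactly.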