TensorFlow Reinforcement Learning Quick Start Guide

By: Kaushik Balakrishnan
Overview of this book

Advances in reinforcement learning algorithms have made it possible to use them for optimal control in several industrial applications. With this book, you will apply reinforcement learning to a range of problems, from computer games to autonomous driving. The book starts by introducing essential reinforcement learning concepts such as agents, environments, rewards, and advantage functions. You will then master the distinctions between on-policy and off-policy algorithms, and between model-free and model-based algorithms. Next, you will learn several reinforcement learning algorithms, including SARSA, Deep Q-Networks (DQN), Deep Deterministic Policy Gradients (DDPG), Asynchronous Advantage Actor-Critic (A3C), Trust Region Policy Optimization (TRPO), and Proximal Policy Optimization (PPO). The book also shows you how to code these algorithms in TensorFlow and Python and apply them to solve computer games from OpenAI Gym. Finally, you will learn how to train a car to drive autonomously in the TORCS racing car simulator. By the end of the book, you will be able to design, build, train, and evaluate feed-forward and convolutional neural networks, code state-of-the-art algorithms, and train agents for various control problems.
Table of Contents (11 chapters)

Summary

In this chapter, we were introduced to our first continuous-action RL algorithm, DDPG, which also happens to be the first actor-critic algorithm in this book. DDPG is an off-policy algorithm, as it uses a replay buffer. We covered the use of policy gradients to update the actor, and the use of an L2 loss on the temporal-difference error to update the critic. Thus, we have two different neural networks: the actor learns the policy, and the critic learns to evaluate the actor's policy, thereby providing a learning signal to the actor. You saw how to compute the gradient of the state-action value, Q(s,a), with respect to the action, as well as the gradient of the policy with respect to its parameters; combining the two gives the policy gradient that is used to update the actor. We trained DDPG on the inverted pendulum problem, and the agent learned it very well.
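The two updates described above can be sketched in a few lines. This is a minimal, hypothetical illustration using linear actor and critic functions and NumPy (not the book's full TensorFlow implementation): the critic is updated by gradient descent on the squared TD error, and the actor is updated with the deterministic policy gradient, i.e. the chain rule through grad_a Q(s, a) and grad_theta mu(s). All dimensions, learning rates, and the `ddpg_update` helper are made-up for illustration; replay buffers and target networks are omitted to keep the core idea visible.

```python
import numpy as np

# Hypothetical toy dimensions and hyperparameters (illustrative only)
state_dim, action_dim = 3, 1
gamma, lr = 0.99, 0.01

rng = np.random.default_rng(0)

# Linear actor: a = mu(s) = s @ W_actor
W_actor = rng.normal(scale=0.1, size=(state_dim, action_dim))
# Linear critic: Q(s, a) = s @ w_s + a @ w_a
w_s = rng.normal(scale=0.1, size=(state_dim, 1))
w_a = rng.normal(scale=0.1, size=(action_dim, 1))

def actor(s):
    return s @ W_actor

def critic(s, a):
    return s @ w_s + a @ w_a

def ddpg_update(s, a, r, s_next):
    """One DDPG-style update on a single transition (s, a, r, s_next)."""
    global W_actor, w_s, w_a
    # --- Critic update: minimise 0.5 * (Q(s,a) - y)^2 ---
    # The target y uses the actor's action at the next state
    # (a full implementation would use separate target networks).
    y = r + gamma * critic(s_next, actor(s_next))
    td_err = critic(s, a) - y            # dLoss/dQ for the squared error
    w_s -= lr * td_err * s.reshape(-1, 1)
    w_a -= lr * td_err * a.reshape(-1, 1)
    # --- Actor update: deterministic policy gradient ---
    # For this linear critic, grad_a Q(s, a) is simply w_a; the chain
    # rule with grad_theta mu(s) = s gives the actor gradient, and we
    # ascend it to increase Q under the current policy.
    dq_da = w_a.flatten()                # shape (action_dim,)
    W_actor += lr * np.outer(s, dq_da)

# One illustrative transition
s = rng.normal(size=state_dim)
a = actor(s).flatten()
ddpg_update(s, a, r=1.0, s_next=rng.normal(size=state_dim))
```

In the real algorithm, these two gradient steps are applied to minibatches sampled from the replay buffer, and slowly-updated target copies of both networks stabilize the critic's target `y`.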

We have come a long way in this chapter...