
Reinforcement Learning with TensorFlow

By: Sayon Dutta

Overview of this book

Reinforcement learning (RL) allows you to develop smart, quick, and self-learning systems in your business environment. It is an effective method for training learning agents and solving a variety of problems in artificial intelligence, from games, self-driving cars, and robots to enterprise applications such as data center energy saving (cooling data centers) and smart warehousing solutions. The book covers the major advances and successes achieved in deep reinforcement learning by combining deep neural network architectures with reinforcement learning. You'll also be introduced to the concept of reinforcement learning, its advantages, and the reasons why it is gaining so much popularity. You'll explore MDPs, Monte Carlo tree search, dynamic programming methods such as policy and value iteration, and temporal difference learning methods such as Q-learning and SARSA. You will use TensorFlow and OpenAI Gym to build simple neural network models that learn from their own actions. You will also see how reinforcement learning algorithms play a role in games, image processing, and NLP. By the end of this book, you will have gained a firm understanding of what reinforcement learning is and how to put your knowledge to practical use by leveraging the power of TensorFlow and OpenAI Gym.

Policy gradients

As per the policy gradient theorem, for the previously specified policy objective functions $J(\theta)$ and any differentiable policy $\pi_\theta(s, a)$, the policy gradient is as follows:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\left[\nabla_\theta \log \pi_\theta(s, a) \, Q^{\pi_\theta}(s, a)\right]$$
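
The score function $\nabla_\theta \log \pi_\theta(s, a)$ at the heart of this theorem has a simple closed form for a linear-softmax policy. As a sanity check (a minimal sketch, not from the book; the feature matrix and dimensions are made up for illustration), the snippet below compares the closed-form score, $\phi(s, a) - \sum_b \pi_\theta(b \mid s)\,\phi(s, b)$, against a finite-difference gradient:

```python
import numpy as np

# Hypothetical linear-softmax policy: pi_theta(a|s) proportional to exp(theta . phi(s, a)).
# phi has shape (n_actions, n_features); each row is the feature vector of one action.

def pi(theta, phi):
    """Action probabilities under the softmax policy."""
    logits = phi @ theta
    z = np.exp(logits - logits.max())   # subtract max for numerical stability
    return z / z.sum()

rng = np.random.default_rng(1)
phi = rng.normal(size=(3, 4))           # 3 actions, 4 features (made-up example)
theta = rng.normal(size=4)
a = 1                                   # the action whose score we evaluate

# Closed-form score function: phi(s,a) minus the policy-weighted average feature
analytic = phi[a] - pi(theta, phi) @ phi

# Finite-difference gradient of log pi(a|s) with respect to theta
eps = 1e-6
numeric = np.zeros_like(theta)
for i in range(len(theta)):
    d = np.zeros_like(theta)
    d[i] = eps
    numeric[i] = (np.log(pi(theta + d, phi)[a])
                  - np.log(pi(theta - d, phi)[a])) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))   # should be near zero
```

The two gradients agree to numerical precision, which is why softmax policies are a common choice: the score function needed by the policy gradient theorem can be computed exactly without automatic differentiation.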

The steps to update the parameters using the Monte Carlo policy gradient approach are shown in the following section.

The Monte Carlo policy gradient

In the Monte Carlo policy gradient approach, we update the parameters by the stochastic gradient ascent method, using the update as per the policy gradient theorem and the return $v_t$ as an unbiased sample of $Q^{\pi_\theta}(s_t, a_t)$. Here, $v_t$ is the cumulative reward from that time step onward:

$$\Delta \theta_t = \alpha \, \nabla_\theta \log \pi_\theta(s_t, a_t) \, v_t$$

The Monte Carlo policy gradient approach is as follows:

Initialize $\theta$ arbitrarily
for each episode $\{s_1, a_1, r_2, \ldots, s_{T-1}, a_{T-1}, r_T\}$ sampled as per the current policy $\pi_\theta$ do
    for step t = 1 to T-1 do
        $\theta \leftarrow \theta + \alpha \, \nabla_\theta \log \pi_\theta(s_t, a_t) \, v_t$
    end for
end for
Output: final $\theta$
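
The Monte Carlo policy gradient loop can be sketched on a toy problem. The snippet below (a minimal illustration, not from the book; the two-armed bandit and all constants are made up) runs the update $\theta \leftarrow \theta + \alpha \nabla_\theta \log \pi_\theta(a) v$ with a softmax policy, and the probability mass should shift toward the rewarding action:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment: a two-armed bandit, treated as a one-step episode.
# Action 1 pays reward 1; action 0 pays reward 0.
def reward(action):
    return float(action == 1)

theta = np.zeros(2)   # policy parameters: one logit per action
alpha = 0.1           # learning rate

def policy(theta):
    """Softmax policy over the two actions."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

for episode in range(500):
    probs = policy(theta)
    a = rng.choice(2, p=probs)          # sample an action from the current policy
    v = reward(a)                        # return v_t (episode is one step long)
    # Score function for a softmax policy: grad log pi = one_hot(a) - probs
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += alpha * grad_log_pi * v     # stochastic gradient ascent update

print(policy(theta))   # probability mass should concentrate on action 1
```

Because the return multiplies the score, episodes that happen to score zero leave the parameters untouched, while rewarding episodes reinforce the sampled action; this reliance on whole-episode returns is also the source of the high variance discussed next.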

Actor-critic algorithms

The preceding policy optimization using the Monte Carlo policy gradient approach leads to high variance. In order to tackle this issue, we use a critic to estimate the state-action value function, that is:

$$Q_w(s, a) \approx Q^{\pi_\theta}(s, a)$$

This gives...