Python Reinforcement Learning

By: Sudharsan Ravichandiran, Sean Saito, Rajalingappaa Shanmugamani, Yang Wenzhuo

Overview of this book

Reinforcement Learning (RL) is one of the most trending and promising branches of artificial intelligence. This Learning Path will help you master not only the basic reinforcement learning algorithms but also advanced deep reinforcement learning algorithms. The Learning Path starts with an introduction to RL, followed by OpenAI Gym and TensorFlow. You will then explore various RL concepts and algorithms, such as Markov Decision Processes, Monte Carlo methods, and dynamic programming, including value and policy iteration. You'll also work on various datasets, including image, text, and video. This example-rich guide will introduce you to deep RL algorithms, such as Dueling DQN, DRQN, A3C, PPO, and TRPO. You will gain experience in several domains, including gaming, image processing, and physical simulations. You'll use TensorFlow and OpenAI Gym to implement algorithms that predict stock prices, generate natural language, and even build other neural networks. You will also learn about imagination-augmented agents, learning from human preference, DQfD, HER, and many other recent advancements in RL. By the end of the Learning Path, you will have all the knowledge and experience needed to implement RL and deep RL in your projects, and you will be ready to enter the world of artificial intelligence to solve various real-life problems.

This Learning Path includes content from the following Packt products:

  • Hands-On Reinforcement Learning with Python by Sudharsan Ravichandiran
  • Python Reinforcement Learning Projects by Sean Saito, Yang Wenzhuo, and Rajalingappaa Shanmugamani

Deterministic policy gradient


As discussed in the previous chapter, DQN uses a Q-network to estimate the state-action value function, with a separate output for each available action. This architecture cannot be applied when the action space is continuous, because we can no longer enumerate one output per action. A careful reader may remember that there is another Q-network architecture, one that takes both the state and the action as its inputs and outputs an estimate of the corresponding Q-value. This architecture doesn't require the number of available actions to be finite, so it can handle continuous actions.
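Assuming tf.keras is available, a minimal sketch of such a critic network might look like the following; the names build_critic, state_dim, and action_dim, as well as the layer sizes, are illustrative assumptions rather than code from the book:

import tensorflow as tf
from tensorflow.keras import layers

# A sketch of a critic network Q(s, a): it takes both the state and the
# (possibly continuous) action as inputs and outputs a single scalar Q-value.
def build_critic(state_dim, action_dim):
    state_in = layers.Input(shape=(state_dim,))
    action_in = layers.Input(shape=(action_dim,))
    # Concatenating state and action lets the network score any
    # state-action pair, with no need to enumerate discrete actions.
    x = layers.Concatenate()([state_in, action_in])
    x = layers.Dense(400, activation="relu")(x)
    x = layers.Dense(300, activation="relu")(x)
    q_value = layers.Dense(1)(x)  # no activation: Q-values are unbounded
    return tf.keras.Model(inputs=[state_in, action_in], outputs=q_value)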

If we use this kind of network to estimate the state-action value function, there must be another network that defines the behavior policy of the agent, namely one that outputs a proper action given the observed state. In fact, this is the intuition behind actor-critic reinforcement learning algorithms. The actor-critic architecture contains two parts:

  1. Actor: The actor defines the behavior policy of the agent, outputting an action for each observed state; a minimal sketch of such a network follows this list.
  2. Critic: The critic estimates the state-action value function, evaluating the actions chosen by the actor.
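As an illustration, here is a minimal sketch of a deterministic actor network in the same style as the critic above; build_actor, state_dim, action_dim, and action_bound are assumed names, and the tanh squashing is just one common way to keep the output inside a bounded continuous action range:

import tensorflow as tf
from tensorflow.keras import layers

# A sketch of a deterministic actor: it maps an observed state directly
# to a continuous action, with no sampling involved.
def build_actor(state_dim, action_dim, action_bound):
    state_in = layers.Input(shape=(state_dim,))
    x = layers.Dense(400, activation="relu")(state_in)
    x = layers.Dense(300, activation="relu")(x)
    # tanh bounds the raw output to [-1, 1]; scaling by action_bound
    # (a hypothetical per-environment constant) maps it to the action range.
    raw_action = layers.Dense(action_dim, activation="tanh")(x)
    action = layers.Lambda(lambda a: a * action_bound)(raw_action)
    return tf.keras.Model(inputs=state_in, outputs=action)

During training, the critic would be fitted to Bellman targets, while the actor would be updated to output actions that the critic scores highly; this interplay is the essence of the deterministic policy gradient.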