Python Reinforcement Learning Projects

By: Sean Saito, Yang Wenzhuo, Rajalingappaa Shanmugamani

Overview of this book

Reinforcement learning (RL) is one of the most exciting and rapidly growing fields in machine learning, thanks to the many novel algorithms and incredible results published in recent years. In this book, you will learn about the core concepts of RL, including Q-learning, policy gradients, Monte Carlo processes, and several deep reinforcement learning algorithms. As you make your way through the book, you'll work on projects with datasets of various modalities, including image, text, and video. You will gain experience in several domains, including gaming, image processing, and physical simulations. You'll explore technologies such as TensorFlow and OpenAI Gym to implement deep reinforcement learning algorithms that predict stock prices, generate natural language, and even build other neural networks. By the end of this book, you will have hands-on experience with eight reinforcement learning projects, each addressing a different topic or algorithm. We hope these practical exercises will give you better intuition and insight into the field of reinforcement learning and how to apply its algorithms to various real-life problems.

Asynchronous advantage actor-critic algorithm


In the previous chapters, we discussed the DQN for playing Atari games and the use of the DPG and TRPO algorithms for continuous control tasks. Recall that DQN has the following architecture:

At each timestep t, the agent observes the frame image x_t and selects an action a_t based on the current learned policy. The emulator (the Minecraft environment) executes this action and returns the next frame image x_{t+1} and the corresponding reward r_t. The quadruplet (x_t, a_t, r_t, x_{t+1}) is then stored in the experience memory and used as a sample for training the Q-network by minimizing the empirical loss function via stochastic gradient descent.
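
The following is a minimal, self-contained sketch of this loop. The toy linear Q-network, the stand-in emulator step_env, and all constants are assumptions for illustration only, not the book's Minecraft code:

    import random
    from collections import deque

    import numpy as np

    N_ACTIONS, OBS_DIM, GAMMA, LR, EPS = 2, 4, 0.99, 0.01, 0.1
    W = np.zeros((OBS_DIM, N_ACTIONS))   # toy linear Q-network: Q(x, a) = (x @ W)[a]
    memory = deque(maxlen=10_000)        # experience memory of quadruplets

    def step_env(x, a):
        """Stand-in for the emulator: returns the next frame and a reward."""
        return np.random.rand(OBS_DIM), float(a == 0)

    x = np.random.rand(OBS_DIM)          # initial frame
    for _ in range(1000):
        # select an action epsilon-greedily from the current learned policy
        if random.random() < EPS:
            a = random.randrange(N_ACTIONS)
        else:
            a = int(np.argmax(x @ W))
        x_next, r = step_env(x, a)
        memory.append((x, a, r, x_next))             # store the quadruplet
        x = x_next

        if len(memory) >= 32:
            # sample a random minibatch and take an SGD step on the
            # empirical loss (r + gamma * max_a' Q(x', a') - Q(x, a))^2
            for xs, act, rew, xn in random.sample(memory, 32):
                target = rew + GAMMA * np.max(xn @ W)
                td_error = target - (xs @ W)[act]
                W[:, act] += LR * td_error * xs      # gradient step for action act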

Deep reinforcement learning algorithms based on experience replay have achieved unprecedented success in playing Atari games. However, experience replay has several disadvantages:

  • It uses more memory and computation per real interaction
  • It requires off-policy learning algorithms that can update from data generated by an older policy
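
Both costs stem from maintaining the replay memory. The asynchronous approach of A3C drops it: several actor-learners run in parallel, each generating fresh on-policy experience and applying its gradients to a shared set of parameters. Below is a minimal, hypothetical sketch of that pattern; the thread-based setup, the faked gradient, and all names are assumptions, not the book's implementation:

    import threading

    import numpy as np

    global_params = np.zeros(4)     # parameters shared by all workers
    lock = threading.Lock()         # serializes the shared update
    LR = 0.01

    def actor_learner(worker_id, n_steps=100):
        global global_params
        rng = np.random.default_rng(worker_id)
        for _ in range(n_steps):
            local = global_params.copy()   # snapshot the latest parameters
            # act in this worker's own copy of the environment and compute
            # a gradient from the fresh, on-policy rollout (faked here)
            grad = rng.standard_normal(4) - local
            with lock:                     # apply the update asynchronously
                global_params += LR * grad

    workers = [threading.Thread(target=actor_learner, args=(i,)) for i in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("final shared parameters:", global_params)

Because each worker always learns from data generated by its own current policy, neither a replay memory nor an off-policy correction is needed. The lock is a simplification here; A3C as published applies updates lock-free, in the style of Hogwild!.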

In order...