Hands-On Reinforcement Learning with Python

By: Sudharsan Ravichandiran

Long Short-Term Memory RNN

RNNs are pretty cool, right? But we have seen a problem in training RNNs called the vanishing gradient problem. Let's explore that a bit. The sky is __. An RNN can easily predict the last word as blue based on the information it has seen. But an RNN cannot capture long-term dependencies. What does that mean? Let's say Archie lived in China for 20 years. He loves listening to good music. He is a very big comic fan. He is fluent in _. Now, you would predict the blank as Chinese. How did you predict that? Because you understood that Archie lived in China for 20 years, you thought he might be fluent in Chinese. But an RNN cannot retain all of this information in memory to say that Archie is fluent in Chinese. Due to the vanishing gradient problem, it cannot remember the information in memory for a long time. How do we solve that?
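
To make the vanishing gradient problem concrete, here is a minimal NumPy sketch (not taken from the book) of what happens during backpropagation through time in a plain RNN: the gradient is multiplied again and again by the recurrent weight matrix and the derivative of tanh, so after many time steps almost nothing is left for the earliest inputs. The sequence length, hidden size, weight scaling, and constant tanh derivative below are illustrative assumptions.

```python
import numpy as np

# A minimal sketch (not from the book) of why gradients vanish in a plain RNN.
# During backpropagation through time, the gradient flowing to earlier time
# steps is repeatedly multiplied by the recurrent weight matrix and the
# derivative of tanh, so its magnitude shrinks roughly geometrically.

np.random.seed(0)

timesteps = 50       # length of the sequence (assumed for illustration)
hidden_size = 4      # size of the hidden state (assumed for illustration)

# Hypothetical recurrent weight matrix, rescaled so its spectral norm is 0.5
W_h = np.random.randn(hidden_size, hidden_size)
W_h = 0.5 * W_h / np.linalg.norm(W_h, 2)

grad = np.ones(hidden_size)   # gradient arriving from the last time step
for t in range(timesteps):
    tanh_derivative = 0.9     # |d tanh/dx| <= 1 and is usually well below 1
    grad = tanh_derivative * (W_h.T @ grad)

# After 50 steps the gradient is vanishingly small, so the earliest inputs
# (such as "Archie lived in China") barely influence the weight updates.
print(np.linalg.norm(grad))   # prints a number extremely close to zero
```

In practice, the tanh derivative varies from step to step, but as long as these factors are consistently smaller than one, their product decays toward zero. That is exactly why the network has forgotten that Archie lived in China by the time it has to fill in the blank.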

Here...