The Reinforcement Learning Workshop

By: Alessandro Palmas, Emanuele Ghelfi, Dr. Alexandra Galina Petre, Mayur Kulkarni, Anand N.S., Quan Nguyen, Aritra Sen, Anthony So, Saikat Basak

Overview of this book

Various intelligent applications such as video games, inventory management software, warehouse robots, and translation tools use reinforcement learning (RL) to make decisions and perform actions that maximize the probability of the desired outcome. This book will help you get to grips with the techniques and algorithms for implementing RL in your machine learning models. Starting with an introduction to RL, you'll be guided through different RL environments and frameworks. You'll learn how to implement your own custom environments and use OpenAI Baselines to run RL algorithms. Once you've explored classic RL techniques such as Dynamic Programming, Monte Carlo, and TD Learning, you'll understand when to apply the different deep learning methods in RL and advance to deep Q-learning. The book will even help you understand the different stages of machine-based problem-solving by using DARQN on the popular video game Breakout. Finally, you'll find out when to use a policy-based method to tackle an RL problem. By the end of The Reinforcement Learning Workshop, you'll be equipped with the knowledge and skills needed to solve challenging problems using reinforcement learning.

Improving Policy Gradients

In this section, we will look at several approaches that improve on the policy gradient method we learned about in the previous section, including Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO).

We will also briefly cover the Advantage Actor-Critic (A2C) technique. Let's start by understanding TRPO in the next section.

Trust Region Policy Optimization

In most cases, RL is very sensitive to initialization and to hyperparameters such as the learning rate. If our learning rate is too high, a single policy update may take our policy network into a region of the parameter space where the next batch of data is collected under a very poor policy, and the network may never recover. We will now look at newer methods that try to eliminate this problem. But before we do that, let's have a quick recap of what we have already covered.
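
To make this concrete, the following is a minimal sketch of the trust-region idea that TRPO builds on: after proposing a policy update, measure the KL divergence between the old and new action distributions and reject (or shrink) any step that moves the policy too far. This is an illustrative PyTorch snippet under stated assumptions, not the book's implementation; the toy logits and the max_kl threshold are hypothetical values chosen for demonstration.

```python
import torch

def categorical_kl(old_logits, new_logits):
    """KL(old || new), averaged over a batch, for categorical action distributions."""
    old_log_probs = torch.log_softmax(old_logits, dim=-1)
    new_log_probs = torch.log_softmax(new_logits, dim=-1)
    old_probs = old_log_probs.exp()
    return (old_probs * (old_log_probs - new_log_probs)).sum(-1).mean()

# Toy logits for a batch of 4 states and 3 actions (illustrative only).
old_logits = torch.randn(4, 3)
# A proposed update: the new policy's logits after one gradient step.
candidate_logits = old_logits + 0.1 * torch.randn(4, 3)

max_kl = 0.01  # trust-region size (hypothetical value)
kl = categorical_kl(old_logits, candidate_logits).item()
if kl <= max_kl:
    print(f"KL={kl:.4f}: within the trust region, accept the update")
else:
    print(f"KL={kl:.4f}: step too large, shrink it before accepting")
```

TRPO itself enforces this constraint more carefully, using a second-order approximation of the KL divergence together with a backtracking line search, rather than the simple accept/reject check sketched here.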

In the Policy Gradients section, we defined...