Hands-On Reinforcement Learning with Python

By: Sudharsan Ravichandiran

Hierarchical reinforcement learning

A problem with RL is that it does not scale well to large state and action spaces, which leads to the curse of dimensionality. Hierarchical reinforcement learning (HRL) was proposed to address the curse of dimensionality: we decompose a large problem into small subproblems arranged in a hierarchy. Let's say the agent's goal is to reach home from school. Here, the problem is split into a set of subgoals, such as walking out of the school gate, booking a cab, and so on.
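To make the decomposition concrete, the following is a minimal, self-contained Python sketch (the subgoal names and the toy environment are our own illustrative assumptions, not code from the book). A high-level policy picks the next unfinished subgoal, and a low-level policy takes primitive steps until that subgoal is done:

# Hypothetical sketch of hierarchical decomposition: a meta-policy chooses
# subgoals and a low-level policy acts until each subgoal is completed.

SUBGOALS = ["exit_school_gate", "book_cab", "reach_home"]

def meta_policy(done):
    # Stand-in for a learned high-level policy: pick the next subgoal.
    for subgoal in SUBGOALS:
        if subgoal not in done:
            return subgoal
    return None

def low_level_policy(subgoal):
    # Stand-in for a learned low-level policy acting toward one subgoal.
    return "step_toward:" + subgoal

def env_step(progress):
    # Toy environment: each subgoal takes three primitive steps to finish.
    progress += 1
    return progress, progress >= 3

done, log = set(), []
while (subgoal := meta_policy(done)) is not None:
    progress, finished = 0, False
    while not finished:
        log.append(low_level_policy(subgoal))
        progress, finished = env_step(progress)
    done.add(subgoal)

print(log)  # nine primitive actions in total, solved as three small problems

Because each subproblem involves only a handful of steps, neither policy ever has to reason over the full task at once; that is the sense in which the hierarchy tames the curse of dimensionality.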

There are different methods used in HRL, such as state-space decomposition, state abstraction, and temporal abstraction. In state-space decomposition, we break the state space into different subspaces and try to solve the problem in those smaller subspaces. Breaking down the state space also allows faster exploration, as the agent does not need to explore the entire state space.
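As a rough illustration of state abstraction (the grid size and the mapping are hypothetical, chosen only for this example), the sketch below maps a 12 x 12 grid of raw states onto four abstract "rooms", so the agent explores and plans over a far smaller space:

# Hypothetical sketch of state abstraction: many raw grid cells collapse
# into a few abstract "room" states.

GRID = 12  # a 12 x 12 grid world: 144 raw states

def abstract_state(x, y):
    # Map a raw cell to one of four rooms (a 2 x 2 abstract state space).
    return (x // (GRID // 2), y // (GRID // 2))

raw_states = [(x, y) for x in range(GRID) for y in range(GRID)]
rooms = {abstract_state(x, y) for x, y in raw_states}

print(len(raw_states))  # 144 raw states
print(len(rooms))       # 4 abstract states to explore instead

An agent planning over the four rooms needs far fewer samples than one planning over all 144 cells, which is exactly the payoff these decomposition methods are after.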