Reinforcement Learning Algorithms with Python

By: Andrea Lonza
Overview of this book

Reinforcement Learning (RL) is a popular and promising branch of AI that involves making smarter models and agents that can automatically determine ideal behavior based on changing requirements. This book will help you master RL algorithms and understand their implementation as you build self-learning agents. Starting with an introduction to the tools, libraries, and setup needed to work in the RL environment, this book covers the building blocks of RL and delves into value-based methods, such as the application of Q-learning and SARSA algorithms. You'll learn how to use a combination of Q-learning and neural networks to solve complex problems. Furthermore, you'll study the policy gradient methods, TRPO, and PPO, to improve performance and stability, before moving on to the DDPG and TD3 deterministic algorithms. This book also covers how imitation learning techniques work and how DAgger can teach an agent to drive. You'll discover evolutionary strategies and black-box optimization techniques, and see how they can improve RL algorithms. Finally, you'll get to grips with exploration approaches, such as UCB and UCB1, and develop a meta-algorithm called ESBAS. By the end of the book, you'll have worked with key RL algorithms to overcome challenges in real-world applications, and be part of the RL research community.
Table of Contents (19 chapters)

  • Section 1: Algorithms and Environments
  • Section 2: Model-Free RL Algorithms
  • Section 3: Beyond Model-Free Algorithms and Improvements
  • Assessments

MDP

An MDP expresses the problem of sequential decision-making, where actions influence both the next states and the rewards obtained. MDPs are general and flexible enough to provide a formalization of the problem of learning a goal through interaction, the same problem that is addressed with RL. Thus, we can express and reason about RL problems in terms of MDPs.

An MDP is a four-tuple (S, A, P, R):

  • S is the state space, a finite set of states.
  • A is the action space, a finite set of actions.
  • P is the transition function, which defines the probability of reaching a state, s′, from state s by taking action a: P(s′, s, a) = p(s′ | s, a). That is, the transition function is equal to the conditional probability of s′ given s and a.
  • R is the reward function, which determines the reward received for transitioning from state s to state s′ by taking action a.

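To make the four-tuple concrete, here is a minimal sketch of a toy two-state MDP in Python (a hypothetical example, not code from the book): P and R are encoded as dictionaries, and a step function samples a successor state according to p(s′ | s, a).

import random

# Hypothetical toy MDP, for illustration only. States and actions are
# strings; P maps (state, action) to a list of (next_state, probability)
# pairs, and R maps (state, action, next_state) to a scalar reward.
P = {
    ("s0", "go"): [("s1", 0.9), ("s0", 0.1)],
    ("s1", "go"): [("s1", 1.0)],
}
R = {
    ("s0", "go", "s1"): 1.0,
    ("s0", "go", "s0"): 0.0,
    ("s1", "go", "s1"): 0.0,
}

def step(state, action):
    """Sample s' ~ p(s' | s, a) and return (next_state, reward)."""
    next_states, probs = zip(*P[(state, action)])
    next_state = random.choices(next_states, weights=probs)[0]
    return next_state, R[(state, action, next_state)]

print(step("s0", "go"))  # e.g. ('s1', 1.0)

Calling step repeatedly from an initial state produces a trajectory of states and rewards, which is exactly the interaction loop an RL agent experiences.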
An illustration...