Hands-On Reinforcement Learning for Games

By: Micheal Lanham

Overview of this book

With the increased presence of AI in the gaming industry, developers are challenged to create highly responsive and adaptive games by integrating artificial intelligence into their projects. This book is your guide to learning how various reinforcement learning techniques and algorithms play an important role in game development with Python. Starting with the basics, this book will help you build a strong foundation in reinforcement learning for game development. Each chapter will assist you in implementing different reinforcement learning techniques, such as Markov decision processes (MDPs), Q-learning, actor-critic methods, SARSA, and deterministic policy gradient algorithms, to build logical self-learning agents. Learning these techniques will enhance your game development skills and help you add features that make your game agents more capable. As you advance, you'll understand how deep reinforcement learning (DRL) techniques can be used to devise strategies that help agents learn from their actions and build engaging games. By the end of this book, you'll be ready to apply reinforcement learning techniques to build a variety of projects and contribute to open source applications.
Table of Contents (19 chapters)

Section 1: Exploring the Environment
Section 2: Exploiting the Knowledge
Section 3: Reward Yourself

What this book covers

Chapter 1, Understanding Rewards-Based Learning, explores the basics of learning, what it is to learn, and how RL differs from other, more classic learning methods. From there, we explore how the Markov decision process works in code and how it relates to learning. This leads us to the classic multi-armed and contextual bandit problems. Finally, we will learn about Q-learning and quality-based model learning.
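
As a preview of the tabular Q-learning idea the chapter builds toward, here is a minimal sketch of an epsilon-greedy action choice and the Q-update; the state/action layout and the hyperparameter values are illustrative assumptions, not code from the book.

    # Tabular Q-learning with epsilon-greedy exploration (illustrative sketch).
    import random
    from collections import defaultdict

    alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration rate
    Q = defaultdict(float)                   # Q[(state, action)] -> estimated quality

    def choose_action(state, actions):
        # Explore with probability epsilon, otherwise exploit the best-known action.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def q_update(state, action, reward, next_state, actions):
        # Off-policy target: the best estimated value available in the next state.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])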

Chapter 2, Dynamic Programming and the Bellman Equation, digs deeper into dynamic programming and explores how the Bellman equation is woven into RL. Here, you will learn how the Bellman equation is used to update a policy. We then go into further detail on policy iteration and value iteration methods, using our understanding of Q-learning to train an agent on a new grid-style environment.
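
To give a flavor of the Bellman backup discussed here, the following is a minimal sketch of one value-iteration sweep; the transition-table layout (P[s][a] as a list of (prob, next_state, reward) tuples) is an assumption made for illustration, not the environment interface used in the book.

    # One sweep of value iteration using the Bellman optimality backup (sketch).
    gamma = 0.99

    def value_iteration_sweep(V, P):
        # P[s][a] is assumed to be a list of (prob, next_state, reward) tuples.
        new_V = {}
        for s, actions in P.items():
            new_V[s] = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in transitions)
                for transitions in actions.values()
            )
        return new_V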

Chapter 3, Monte Carlo Methods, explores model-based methods and how they can be used to train agents on more classic board games.
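
As a rough illustration of the Monte Carlo idea, here is a sketch of first-visit value estimation from a sampled episode; the (state, reward) episode format is an assumption for illustration, not the book's interface.

    # First-visit Monte Carlo value estimation from a sampled episode (sketch).
    from collections import defaultdict

    gamma = 0.99
    returns = defaultdict(list)   # state -> list of sampled returns

    def mc_update(episode, V):
        # episode is assumed to be a list of (state, reward) pairs.
        G = 0.0
        for t in reversed(range(len(episode))):
            state, reward = episode[t]
            G = reward + gamma * G
            if state not in (s for s, _ in episode[:t]):   # first visit only
                returns[state].append(G)
                V[state] = sum(returns[state]) / len(returns[state])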

Chapter 4, Temporal Difference Learning, explores the heart of RL and how it solves the temporal credit assignment problem often discussed in academia. We apply temporal difference learning (TDL) to Q-learning and use it to solve a grid world environment (such as FrozenLake).
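
The core of temporal difference learning can be stated in a few lines; the following TD(0) state-value update is a sketch of that idea, with placeholder hyperparameters rather than the chapter's settings.

    # TD(0) state-value update: bootstrap from the estimated value of the next state.
    alpha, gamma = 0.1, 0.99

    def td0_update(V, state, reward, next_state, done):
        target = reward + (0.0 if done else gamma * V[next_state])
        V[state] += alpha * (target - V[state])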

Chapter 5, Exploring SARSA, goes deeper into the fundamentals of on-policy methods such as SARSA. We will explore policy-based learning through an understanding of the partially observable Markov decision process. Then, we'll look at how we can implement SARSA with Q-learning. This will set the stage for the more advanced policy methods, PPO and TRPO, that we explore in later chapters.
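
For comparison with the Q-learning update sketched earlier, here is a sketch of the on-policy SARSA update; the key difference is that the target uses the action the policy actually takes in the next state.

    # On-policy SARSA update (illustrative sketch).
    alpha, gamma = 0.1, 0.99

    def sarsa_update(Q, s, a, reward, s_next, a_next, done):
        # Bootstrap from the action the current policy actually chose next.
        target = reward + (0.0 if done else gamma * Q[(s_next, a_next)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])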

Chapter 6, Going Deep with DQN, takes the Q-learning model and integrates it with deep learning to create advanced agents known as deep Q-learning networks (DQNs). From there, we explain how basic deep learning models perform regression and, in this case, how they can be used to solve the Q equation. We will use DQNs in the CartPole environment.
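
As an indication of what a DQN looks like in code, here is a small fully connected Q-network for CartPole sketched in PyTorch; the layer sizes and the choice of PyTorch here are assumptions for illustration, and the book's actual model and training loop may differ.

    # A small fully connected Q-network for CartPole, sketched in PyTorch.
    import torch
    import torch.nn as nn

    q_net = nn.Sequential(
        nn.Linear(4, 64),   # CartPole observations have 4 values
        nn.ReLU(),
        nn.Linear(64, 64),
        nn.ReLU(),
        nn.Linear(64, 2),   # one Q-value per action (push left, push right)
    )

    def q_values(observation):
        # Map a raw observation to estimated Q-values for both actions.
        with torch.no_grad():
            return q_net(torch.as_tensor(observation, dtype=torch.float32))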

Chapter 7, Going Deeper with DDQNs, looks at how extensions to deep learning (DL) called convolutional neural networks (CNNs) can be used to observe a visual state. We will then use that knowledge to play Atari games and look at further enhancements.
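
A typical convolutional front end for Atari-style visual states looks like the sketch below; the kernel sizes follow the widely used convention for 84 x 84 stacked grayscale frames, and the action count of 6 is an assumed placeholder rather than the book's configuration.

    # Convolutional Q-network over stacked Atari frames (sketch, PyTorch).
    import torch.nn as nn

    cnn_q_net = nn.Sequential(
        nn.Conv2d(4, 32, kernel_size=8, stride=4),   # 4 stacked 84x84 grayscale frames
        nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2),
        nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, stride=1),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * 7 * 7, 512),
        nn.ReLU(),
        nn.Linear(512, 6),                           # one Q-value per game action (assumed)
    )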

Chapter 8, Policy Gradient Methods, delves into more advanced policy methods and how they integrate into deep RL agents. This is an advanced chapter as it covers higher-level calculus and probability concepts. You will get to experience the MuJoCo animation RL environment in this chapter as a reward for your hard work.
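
As a small taste of the math-in-code this chapter deals with, here is a sketch of a REINFORCE-style policy gradient loss; the inputs (per-step log-probabilities and returns from a sampled episode) are assumptions for illustration.

    # REINFORCE-style policy gradient loss (illustrative sketch, PyTorch).
    import torch

    def policy_gradient_loss(log_probs, returns):
        # log_probs: list of per-step log pi(a|s) tensors; returns: per-step returns.
        returns = torch.as_tensor(returns, dtype=torch.float32)
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize as a simple baseline
        return -(torch.stack(log_probs) * returns).sum()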

Chapter 9, Optimizing for Continuous Control, looks at improving the policy methods we covered previously for continuous control of advanced environments. We start off by setting up and installing the MuJoCo environment. After that, we look at using recurrent networks to capture context and see how they are applied on top of PPO. Then we return to the actor-critic method, this time looking at asynchronous actor-critic in a couple of different configurations, before finally progressing to actor-critic with experience replay.

Chapter 10, All Together Rainbow DQN, tells us all about Rainbow, the algorithm in which Google DeepMind combined a number of RL enhancements. Rainbow is another advanced tool that you can explore and either borrow from or use to work with more advanced RL environments.

Chapter 11, Exploiting ML-Agents, looks at how we can either use elements from the ML-Agents toolkit in our own agents or use the toolkit to get a fully developed agent.

Chapter 12, DRL Frameworks, opens up the possibilities of playing with solo agents in a variety of environments. We will explore multi-agent environments as well.

Chapter 13, 3D Worlds, shows us how to use RL agents effectively to tackle a variety of challenges in 3D environments.

Chapter 14, From DRL to AGI, looks beyond DRL and into the realm of AGI, or at least where we hope we are going with AGI. We will also look at various DRL algorithms that can be applied in the real world.