Hands-On Reinforcement Learning for Games

By: Micheal Lanham

Overview of this book

With the increased presence of AI in the gaming industry, developers are challenged to create highly responsive and adaptive games by integrating artificial intelligence into their projects. This book is your guide to learning how various reinforcement learning techniques and algorithms play an important role in game development with Python. Starting with the basics, this book will help you build a strong foundation in reinforcement learning for game development. Each chapter will assist you in implementing different reinforcement learning techniques, such as Markov decision processes (MDPs), Q-learning, actor-critic methods, SARSA, and deterministic policy gradient algorithms, to build logical self-learning agents. Learning these techniques will enhance your game development skills and add a variety of features to improve your game agent’s productivity. As you advance, you’ll understand how deep reinforcement learning (DRL) techniques can be used to devise strategies to help agents learn from their actions and build engaging games. By the end of this book, you’ll be ready to apply reinforcement learning techniques to build a variety of projects and contribute to open source applications.
Table of Contents (19 chapters)

Section 1: Exploring the Environment
Section 2: Exploiting the Knowledge
Section 3: Reward Yourself

Working with a DQN on Atari

Now that we've looked at the output CNNs produce in terms of filters, the best way to understand how this works is to look at the code that constructs them. Before we get to that, though, let's begin a new exercise in which we use a new form of DQN to solve an Atari environment:

  1. Open this chapter's sample code, which can be found in the Chapter_7_DQN_CNN.py file. The code is fairly similar to Chapter_6_lunar.py but with some critical differences. We will just focus on the differences in this exercise. If you need a better explanation of the code, review Chapter 6, Going Deep with DQN:
from wrappers import *
  2. Starting at the top, the only change is a new import from a local file called wrappers.py. We will examine what this does by creating the environment:
env_id = 'PongNoFrameskip-v4'
env = make_atari(env_id)
env = wrap_deepmind(env)
env = wrap_pytorch...
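The wrappers applied above perform the DeepMind-style preprocessing that makes raw Atari frames digestible for a CNN: converting RGB frames to grayscale, downsampling them to 84 x 84, and rescaling pixel values. As a rough illustration of what that preprocessing involves (not the book's actual wrapper code), here is a minimal NumPy-only sketch; `preprocess_frame` and its nearest-neighbour resize are my own simplifications of what `wrap_deepmind` does internally with proper image resizing:

```python
import numpy as np

def preprocess_frame(frame, size=84):
    """Convert an RGB Atari frame to a grayscale, downsampled
    observation, loosely mimicking DeepMind-style preprocessing."""
    # Luminance-weighted grayscale conversion (ITU-R 601 weights).
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # Crude nearest-neighbour downsample to size x size
    # (the real wrappers use proper image interpolation).
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = gray[rows][:, cols]
    # Scale to [0, 1] floats for the network input.
    return (small / 255.0).astype(np.float32)

# A synthetic 210 x 160 RGB frame, the native Atari screen resolution.
frame = np.random.randint(0, 256, (210, 160, 3), dtype=np.uint8)
obs = preprocess_frame(frame)
print(obs.shape)  # (84, 84)
```

Shrinking and stacking frames like this keeps the network small and gives the agent the motion information a single still frame lacks, which is why nearly all Atari DQN implementations reuse this pipeline.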