Understanding Backplay

In late 2018, Cinjon Resnick and his co-authors released an innovative paper titled Backplay: Man muss immer umkehren (https://arxiv.org/abs/1807.06919), which introduced a refined form of Curriculum Learning called Backplay. The basic premise is that you start the agent more or less at the goal, and then progressively move its starting position back toward the normal start as training progresses. This method may not work in every situation, but in the following exercise we will combine it with Curriculum Training to see whether we can improve the VisualHallway example (a short sketch of the idea follows the exercise steps):

  1. Open the VisualHallway scene from the Assets | ML-Agents | Examples | Hallway | Scenes folder.
  2. Make sure the scene is reset to its default state. If you have modified it in earlier exercises, pull down the ML-Agents source again.
  3. Set the scene for learning using the VisualHallwayLearning brain, and make sure that the agent is just using the default visual observations...
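Before we go further with the exercise, it may help to see the Backplay idea in isolation. The following is a minimal Python sketch, not the ML-Agents implementation: the ToyHallway environment, the random policy, and the lesson distances and threshold are all illustrative assumptions. It shows a curriculum that starts the agent a few steps from the goal and only moves the start farther back once the agent succeeds reliably.

    import random

    class ToyHallway:
        """A 1-D stand-in for the VisualHallway corridor: the agent must
        walk to the goal at position `length`."""

        def __init__(self, length=20):
            self.length = length
            self.position = 0

        def reset(self, start_distance):
            # Backplay: place the agent `start_distance` steps from the goal
            # rather than at the corridor's normal starting point.
            self.position = max(0, self.length - start_distance)
            return self.position

        def step(self, action):
            # action is +1 (toward the goal) or -1 (away from it)
            self.position = min(max(self.position + action, 0), self.length)
            done = self.position == self.length
            reward = 1.0 if done else -0.01
            return self.position, reward, done


    def run_episode(env, start_distance, max_steps=100):
        """Roll out one episode with a random policy (a real agent would learn)."""
        env.reset(start_distance)
        for _ in range(max_steps):
            _, _, done = env.step(random.choice([-1, 1]))
            if done:
                return True
        return False


    # The Backplay curriculum: lessons move the start progressively farther
    # from the goal, advancing only once the success rate crosses a threshold.
    lessons = [2, 5, 10, 15, 20]      # start distances, easiest first (illustrative values)
    threshold = 0.8                   # success rate required to advance a lesson
    episodes_per_eval = 50

    env = ToyHallway(length=20)
    lesson = 0
    for _ in range(40):               # greatly simplified outer training loop
        wins = sum(run_episode(env, lessons[lesson]) for _ in range(episodes_per_eval))
        if wins / episodes_per_eval >= threshold and lesson < len(lessons) - 1:
            lesson += 1               # mastered this start; move the agent farther back
    print("Reached lesson", lesson, "with start distance", lessons[lesson])

In the ML-Agents version of this exercise, the same progression is driven by curriculum lessons and thresholds defined in a configuration file rather than by a hand-rolled loop, but the structure of easy-to-hard lessons gated by performance is the same idea.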