Hands-On Deep Learning for Games

By : Micheal Lanham
Overview of this book

The number of applications of deep learning and neural networks has multiplied in the last couple of years. Neural networks have enabled significant breakthroughs in everything from computer vision and voice recognition to voice generation and self-driving cars. Game development is another key area where these techniques are being applied. This book gives an in-depth view of the potential of deep learning and neural networks in game development. We start with the foundations of multilayer perceptrons and move on to convolutional and recurrent networks, covering applications that range from GANs that create music or textures to self-driving models and chatbots. We then introduce deep reinforcement learning (DRL) through the multi-armed bandit problem and other OpenAI Gym environments. As the book progresses, we gain insights into DRL techniques such as motivated reinforcement learning with curiosity, and curriculum learning. We also take a closer look at deep reinforcement learning with the Unity ML-Agents toolkit. By the end of the book, we will have seen how to apply DRL and the ML-Agents toolkit to enhance, test, and automate games or simulations. Finally, we cover possible next steps and areas for future learning.
Table of Contents (18 chapters)

Section 1: The Basics
Section 2: Deep Reinforcement Learning
Section 3: Building Games

What this book covers

Chapter 1, Deep Learning for Games, covers the background of deep learning in games and then introduces the basics by building a simple perceptron. From there, we will learn the concept of network layers and build a simple autoencoder.
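To give a flavor of the perceptron the chapter builds, here is a minimal sketch of a classic threshold perceptron trained with the perceptron update rule. It is not the book's exact code; the function names and the choice of learning the logical AND function are illustrative assumptions.

```python
# A minimal single perceptron: weighted sum, step activation,
# and the perceptron learning rule. Trained here on logical AND.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train two weights and a bias with the perceptron update rule."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Weighted sum followed by a step activation
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if total > 0 else 0
            error = target - output
            # Nudge each weight in the direction that reduces the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# AND truth table as (inputs, target) pairs
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this simple rule finds a separating line in a finite number of updates.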

Chapter 2, Convolutional and Recurrent Networks, explores advanced layers, known as convolution and pooling, and how to apply them to building a self-driving deep network. Then, we will look at the concept of learning sequences with recurrent layers in deep networks.

Chapter 3, GAN for Games, outlines the concept of a generative adversarial network (GAN), an architectural pattern that pits two competing networks against one another. We will then explore and use various GANs to generate a game texture and original music.

Chapter 4, Building a Deep Learning Gaming Chatbot, goes into further detail regarding recurrent networks and develops several forms of conversational chatbot. We will finish the chapter by connecting the chatbot to Unity so that it can be conversed with in a game.

Chapter 5, Introducing DRL, begins with the basics of reinforcement learning before moving on to cover multi-armed bandits and Q-learning. We will then quickly move on to integrating deep learning and will explore deep reinforcement learning using the OpenAI Gym environment.
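The multi-armed bandit problem mentioned here captures the core exploration-versus-exploitation trade-off in RL. The sketch below shows one common strategy, epsilon-greedy with incremental value estimates; the arm payout probabilities and all names are illustrative assumptions, not taken from the book.

```python
# Epsilon-greedy multi-armed bandit: with probability epsilon pick a
# random arm (explore), otherwise pick the arm with the highest value
# estimate (exploit). Value estimates are incremental sample averages.
import random

def run_bandit(arm_probs, steps=5000, epsilon=0.1, seed=42):
    """Play a Bernoulli bandit and return value estimates and pull counts."""
    rng = random.Random(seed)
    values = [0.0] * len(arm_probs)   # running value estimate per arm
    counts = [0] * len(arm_probs)     # number of pulls per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(arm_probs))   # explore
        else:
            arm = values.index(max(values))       # exploit
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        # Incremental mean update: V <- V + (r - V) / n
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts

values, counts = run_bandit([0.2, 0.5, 0.8])
print("best arm:", values.index(max(values)))
```

After enough pulls, the estimate for the highest-paying arm dominates, so exploitation concentrates on it while occasional exploration keeps the other estimates honest.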

Chapter 6, Unity ML-Agents, begins by exploring the ML-Agents toolkit, which is a powerful deep reinforcement learning platform built on top of Unity. We will then learn how to set up and train various demo scenarios provided with the toolkit.

Chapter 7, Agent and the Environment, explores how the input state captured from the environment affects training, and the issues this can cause. We will look at ways to address these issues by building different input state encoders for various visual environments.

Chapter 8, Understanding PPO, explains how learning to train agents requires some in-depth background knowledge of the various algorithms used in DRL. In this chapter, we will explore in depth the powerhouse of the ML-Agents toolkit, the proximal policy optimization algorithm.

Chapter 9, Rewards and Reinforcement Learning, explains how rewards are foundational to RL, exploring their importance and how to model reward functions. We will also explore the problem of sparse rewards and ways of overcoming it in RL with curriculum learning and backplay.

Chapter 10, Imitation and Transfer Learning, explores two further advanced methods, imitation learning and transfer learning, as ways of overcoming the sparsity of rewards and other agent training problems. We will then look at other ways of applying transfer learning.

Chapter 11, Building Multi-Agent Environments, explores a number of scenarios that incorporate multiple agents competing against or cooperating with each other.

Chapter 12, Debugging/Testing a Game with DRL, explains how to build a testing/debugging framework with ML-Agents for use on your next game, a newer application of DRL that is less well covered.

Chapter 13, Obstacle Tower Challenge and Beyond, explores what is next for you. Are you prepared to take on the Unity Obstacle Tower challenge and build your own game, or perhaps you require further learning?