Learn Unity ML-Agents - Fundamentals of Unity Machine Learning

Overview of this book

Unity Machine Learning Agents allows researchers and developers to create games and simulations in the Unity Editor, which serves as an environment where intelligent agents can be trained with machine learning methods through a simple-to-use Python API. This book takes you from the basics of Reinforcement Learning and Q Learning to building Deep Recurrent Q-Network agents that cooperate or compete in a multi-agent ecosystem. You will start with the fundamentals of Reinforcement Learning and how to apply them to problems. Then you will learn how to build self-learning, advanced neural networks with Python and Keras/TensorFlow. From there, you will move on to more advanced training scenarios, where you will learn further innovative ways to train your network with A3C, imitation, and curriculum learning models. By the end of the book, you will have learned how to build more complex environments by constructing a cooperative and competitive multi-agent ecosystem.
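
The early chapters cover tabular Q Learning before moving on to deep networks. As a quick, illustrative taste (this is not code from the book; the state/action sizes and hyperparameters below are assumptions), the core Q-value update looks like this in Python:

import numpy as np

# Tabular Q Learning sketch over a small, hypothetical state/action space.
n_states, n_actions = 16, 4
q_table = np.zeros((n_states, n_actions))

alpha = 0.1    # learning rate
gamma = 0.99   # discount factor
epsilon = 0.1  # exploration rate for epsilon-greedy action selection

def choose_action(state):
    # Explore with probability epsilon, otherwise exploit the best known action.
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(q_table[state]))

def update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (td_target - q_table[state, action])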

Summary

In our final chapter together, we built a larger multi-agent training scenario called Terrarium, modeled after Microsoft Terrarium, a developer game Microsoft released in 2002 to promote the security features of .NET. We first spent time understanding the original game's theme and rules, and which of those rules our creature agents would need to follow in our simulation. From there, we pulled down some useful assets to make our simulation a little more game-like. Then, we built the foundations of our terrarium and created our first creature, the plant. The plant is essential to the life and training of our higher-level agents, such as the herbivore, which was the next creature we built and started training as an ML-Agent in our scene. After building the herbivore, we moved on to building a carnivore creature as a way to balance and finish...