Seeing the world through our agent's eyes

In order to make our AI convincing, our agent needs to be able to respond to the events around it: the environment, the player, and even other agents. Much like real living organisms, our agent can rely on sight, sound, and other "physical" stimuli. However, we have the advantage of being able to access far more data within our game than a real organism can gather from its surroundings, such as the player's location (regardless of whether or not they are in the vicinity), their inventory, the location of items around the world, and any variable you choose to expose to that agent in your code.

In the preceding diagram, our agent's field of vision is represented by the cone in front of it, and its hearing range is represented by the grey circle surrounding it.
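To make this concrete, here is a minimal sketch of how such a cone and circle might be checked in Unity. The field names (fieldOfViewAngle, viewDistance, hearingRadius) and the target reference are illustrative assumptions, not the book's later implementation:

```csharp
using UnityEngine;

// Hypothetical sketch: tests whether a target falls inside the agent's
// vision cone or hearing circle from the diagram. The field names and
// values are illustrative assumptions, not the book's implementation.
public class SimpleSenses : MonoBehaviour
{
    public Transform target;              // the player or another agent
    public float fieldOfViewAngle = 110f; // full width of the cone, in degrees
    public float viewDistance = 10f;      // how far the cone extends
    public float hearingRadius = 5f;      // the grey circle around the agent

    public bool CanSeeTarget()
    {
        Vector3 toTarget = target.position - transform.position;
        if (toTarget.magnitude > viewDistance)
            return false;

        // Inside the cone when the angle from our forward vector is less
        // than half of the total field of view
        return Vector3.Angle(transform.forward, toTarget) < fieldOfViewAngle * 0.5f;
    }

    public bool CanHearTarget()
    {
        // Hearing is omnidirectional, so a simple distance check suffices
        return Vector3.Distance(transform.position, target.position) <= hearingRadius;
    }
}
```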

Vision, sound, and other senses can be thought of, at their most essential level, as data. Vision is just light particles, sound is just vibrations, and so on. While we don't need to replicate the complexity of a constant stream of light particles bouncing around and entering our agent's eyes, we can still model the data in a way that produces believable results.
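For example, rather than simulating light itself, a single raycast from the agent to the target can stand in for an unobstructed line of sight. This is a hedged sketch that reuses the target and viewDistance fields from the previous snippet and assumes the cone check has already passed:

```csharp
// Hypothetical sketch: a single raycast stands in for the stream of light
// particles. Sight is blocked if the ray hits something before the target.
bool HasLineOfSight()
{
    Vector3 toTarget = target.position - transform.position;
    RaycastHit hit;
    if (Physics.Raycast(transform.position, toTarget.normalized, out hit, viewDistance))
    {
        // Visible only if the first thing the ray strikes is the target itself
        return hit.transform == target;
    }
    return false;
}
```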

As you might imagine, we can model other sensory systems similarly: not just the senses used by biological beings, such as sight, sound, or smell, but also digital and mechanical systems that can be used by enemy robots or towers, for example, sonar and radar.
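As an illustration, a mechanical sense such as radar could ignore occlusion and facing entirely and simply report everything in range. A minimal sketch, assuming detectable objects live on a dedicated layer (detectionLayers is our assumption, not a Unity built-in):

```csharp
using UnityEngine;

// Hypothetical sketch of a mechanical "radar" sense: report everything on
// the chosen layers within range, ignoring walls and facing direction.
public class SimpleRadar : MonoBehaviour
{
    public float radarRange = 20f;
    public LayerMask detectionLayers; // assumed layer setup for detectable objects

    public Collider[] Sweep()
    {
        // Every collider on the detection layers within radarRange
        return Physics.OverlapSphere(transform.position, radarRange, detectionLayers);
    }
}
```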

If you've ever played Metal Gear Solid, then you've definitely seen these concepts in action: an enemy's field of vision is denoted on the player's mini map as a cone-shaped field of view. Enter the cone, and an exclamation mark appears over the enemy's head, followed by an unmistakable chime, letting the player know that they've been spotted.