Learn ARCore - Fundamentals of Google ARCore

Overview of this book

Are you a mobile or web developer who wants to create immersive Augmented Reality apps with the latest Google ARCore platform? If so, this book will help you jump right into developing with ARCore and build an AR app step by step. It teaches you how to implement the core features of ARCore, starting from the fundamentals of 3D rendering and moving on to more advanced concepts such as lighting, shaders, and machine learning. We'll begin with the basics of setting up a project on three platforms: web, Android, and Unity. Next, we'll go through the ARCore concepts of motion tracking, environmental understanding, and light estimation. For each core concept, you'll work on a practical project that uses and extends the corresponding ARCore feature. You'll write custom shaders to light virtual objects in AR, then build a neural network to recognize the environment, and explore even grander applications by using ARCore in mixed reality. By the end of the book, you'll know how to implement motion tracking and environmental understanding, create animations and sounds, and generate and simulate virtual characters on screen.
Table of Contents (17 chapters)

Interacting with the environment


We know that ARCore provides us with the feature points and planes/surfaces it identifies around the user. To those identified points or planes, we can anchor virtual objects. Because ARCore keeps tracking these points and planes for us, objects attached to a plane stay fixed in place as the user moves. But how do we determine where a user is trying to place an object? To do that, we use a technique called ray casting. Ray casting takes the user's two-dimensional touch point and casts a ray from it into the 3D scene. That ray is then tested against objects in the scene for collisions. The following diagram shows how this works:

Example of ray casting from device screen to 3D space
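To make the idea concrete, here is a minimal sketch of the two steps in the diagram, written in plain Python rather than the ARCore API: converting a touch point into a ray in camera space (assuming a simple pinhole camera looking down the negative Z axis), and intersecting that ray with a detected plane. The function names and the 60-degree field of view are illustrative assumptions, not part of ARCore itself; in an actual ARCore app, Frame.hitTest() performs this work for you.

```python
import math

def screen_to_ray(touch_x, touch_y, width, height, fov_y_deg):
    """Convert a 2D touch point (pixels) into a normalized 3D ray
    direction in camera space; the camera looks down -Z."""
    aspect = width / height
    # Map pixels to normalized device coordinates in [-1, 1]
    ndc_x = (2.0 * touch_x / width) - 1.0
    ndc_y = 1.0 - (2.0 * touch_y / height)
    tan_half = math.tan(math.radians(fov_y_deg) / 2.0)
    dx = ndc_x * tan_half * aspect
    dy = ndc_y * tan_half
    dz = -1.0
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Return the point where the ray meets the plane, or None if the
    ray is parallel to the plane or the hit is behind the camera."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-6:
        return None  # ray parallel to the plane: no usable hit
    diff = tuple(p - o for p, o in zip(plane_point, origin))
    t = dot(diff, plane_normal) / denom
    if t < 0:
        return None  # plane is behind the camera
    return tuple(o + t * d for o, d in zip(origin, direction))

# Touch the lower half of a 1080x1920 portrait screen and test the
# ray against a horizontal floor plane one meter below the camera.
ray = screen_to_ray(540, 1440, 1080, 1920, fov_y_deg=60)
hit = ray_plane_hit((0.0, 0.0, 0.0), ray,
                    plane_point=(0.0, -1.0, 0.0),
                    plane_normal=(0.0, 1.0, 0.0))
```

A hit gives us a 3D point on the plane where the virtual object can be anchored; a touch aimed at the horizon returns no hit, which is why placement apps usually ignore taps that miss every detected plane.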

You have, of course, likely seen this at work countless times already. Not only the sample app but virtually every 3D application uses ray casting for object interaction and collision detection. Now that we understand how ray casting works, let's see how this...