Enterprise Augmented Reality Projects

By: Jorge R. López Benito, Enara Artetxe González

Overview of this book

Augmented reality (AR) is expanding its scope from mobile and game applications to the enterprise. Different industries use AR to enhance assembly-line visualization, guide operators through difficult tasks, attract more customers, and even improve training techniques. In this book, you'll gain comprehensive insights into developing AR-based apps for six different enterprise sectors, focusing on market needs and choosing the most suitable tool in each case. You'll delve into the basics of Unity and get familiar with Unity assets, materials, and resources, which will give you a strong foundation for the AR projects covered in the book. You'll build real-world projects for industries such as marketing, retail, and automation in a step-by-step manner, gaining hands-on experience in developing your own industrial AR apps. While building the projects, you'll explore the main AR frameworks used in the enterprise environment, such as Vuforia, EasyAR, ARCore, and ARKit, and understand how they can be used on their own or integrated into the Unity 3D engine to create AR markers, 3D models, and the components of an AR app. By the end of this book, you'll be well versed in using different commercial AR frameworks as well as Unity to build robust AR projects.

Understanding AR

AR is the term used to describe technology that allows users to view part of the real world through the camera of a device (smartphone, tablet, or AR glasses) together with virtual graphical information added by that device. By overlaying virtual elements on tangible physical ones, the device creates augmented reality in real time. The following image shows how AR works:

A user seeing a 3D apple in AR with a tablet

Now, we are going to look at the beginnings of AR and learn how AR can be divided according to its functionality.

Short history – the beginnings of a new reality

AR is not a new technology. Its story begins with a machine invented by Morton Heilig, a philosopher, visionary, and filmmaker, who, in 1957, began to build a prototype whose appearance was similar to the arcade video game machines that were so popular in the 90s. The following image shows a schema of how the prototype worked:

A schema of how the invention worked (image created by Morton Heilig)

Morton called his invention the Sensorama, an experience that projected 3D images, added surround sound, made the seat vibrate, and blew wind at the viewer. The closest experience we can have today is watching a movie in a 4D cinema, but the Sensorama was created more than 60 years ago.

In 1968, Harvard Electrical Engineering professor Ivan Sutherland created a device that would be key to the future of AR technology: the Head-Mounted Display (HMD). Far from the AR glasses we know today, this HMD, called the Sword of Damocles, was a huge machine that hung from the ceiling of a laboratory and only worked when the user stood in exactly the right place. In the following image, you can see what this invention looked like:

The Sword of Damocles (this image was created by OyundariZorigtbaatar) 

In 1992, Boeing researcher Tom Caudell coined the term AR, and at around the same time, AR technology was boosted by two other works. The first AR system, created by L.B. Rosenberg for the United States Air Force, was a device that advised the user on how to perform certain tasks as they came up, something like a virtual guide. This can be seen in the following image:

Virtual Fixtures AR system on the left and its view on the right (this image was created by AR Trends)

The other research in this area was led at Columbia University, where a team of scientists created an HMD that interacted with a printer. The device, baptized KARMA (Knowledge-based Augmented Reality for Maintenance Assistance), projected a 3D image to show the user how to refill the printer, instead of sending them to the user manual.

The following diagram represents the continuum of advanced computer interfaces, based on Milgram and Kishino (1994), where we can see the different subdivisions of MIXED REALITY (MR), which span from the REAL ENVIRONMENT to VIRTUAL REALITY. AR, located nearer to the REAL ENVIRONMENT, is divided into spatial AR and see-through AR. However, the arrival of mobile devices in the 21st century has enabled a different form of AR, displayed through the device's screen and camera:

MIXED REALITY and its subdivisions

Now that we have introduced the beginnings of AR, let's learn how this technology can be classified depending on the trigger that's used to show virtual elements in the real world.

The magic behind AR

AR can be created in many ways; the main challenge is how to make the combination of the real and virtual worlds as seamless as possible. Based on what is used to trigger the virtual elements to appear in the real world, AR can be classified as follows:

  • GPS coordinates: We use GPS coordinates, compasses, and accelerometers to locate the exact position of the user, including the cardinal direction they are facing. Depending on where the user is pointing, they will see one set of virtual objects or another from the same position.
  • Black and white markers: We use very simple images, similar to black and white QR codes, to project virtual objects on top of them. This was one of the first forms of AR, although nowadays it is used less often, as there are more realistic ways to create an AR experience.
  • Image markers: We use the camera of the mobile device to locate predefined images (also called targets or markers) and then project virtual objects over them. This type of AR has replaced black and white markers.
  • Real-time markers: The user creates and defines their own markers with the mobile camera on which to project any virtual object.
  • Facial recognition: Through the camera, we capture the movements of the face to trigger certain actions in the app, for example, giving facial expressions to a virtual avatar.
  • SLAM: Short for Simultaneous Localization And Mapping, this technology understands the physical world through feature points, making it possible for AR applications to recognize 3D objects and scenes, instantly track the world, and overlay digital interactive augmentations.
  • Beacons: Beacons, RFID, and NFC are identification systems that use radio frequency or Bluetooth and, similar to GPS coordinates, trigger the AR elements.
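The classification above can be expressed as a small data structure. The following is a minimal illustrative sketch in Python (the book's projects use Unity and C#, so this is not code from any project): the enum names and the capability-based heuristic are assumptions for illustration only, not recommendations from any particular AR framework.

```python
from enum import Enum, auto

class ARTrigger(Enum):
    """Illustrative taxonomy of the AR trigger types listed above."""
    GPS_COORDINATES = auto()
    BW_MARKER = auto()
    IMAGE_MARKER = auto()
    REALTIME_MARKER = auto()
    FACIAL_RECOGNITION = auto()
    SLAM = auto()
    BEACON = auto()

def suggest_trigger(has_camera: bool, has_gps: bool, needs_world_tracking: bool) -> ARTrigger:
    """Toy heuristic: pick a trigger type from device capabilities.

    The decision rules are simplified assumptions for illustration,
    not the logic of any real AR framework.
    """
    if needs_world_tracking and has_camera:
        return ARTrigger.SLAM          # markerless world tracking needs camera-based mapping
    if has_camera:
        return ARTrigger.IMAGE_MARKER  # image targets are a common camera-based default
    if has_gps:
        return ARTrigger.GPS_COORDINATES
    return ARTrigger.BEACON            # fall back to radio-based identification

print(suggest_trigger(has_camera=True, has_gps=True, needs_world_tracking=True).name)
```

In a real project, this choice is usually made by picking a framework feature (for example, image targets in Vuforia or world tracking in ARCore/ARKit) rather than by code like this; the sketch only makes the taxonomy concrete.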

Now you have a better grasp of what AR is and where it comes from. We have covered the basics of AR by looking at the first prototypes, and we have classified the different types of AR according to the element that triggers the virtual images to appear on the screen. The next step is to see what is required to work with it.