Augmented Reality with Kinect

By: Rui Wang

Overview of this book

Microsoft Kinect changes the notion of user interface design. Unlike most other input controllers, it lets users interact with a program without touching a mouse or trackpad: its motion-sensing technology needs only real-time camera images, tracked skeletons, and gestures. Augmented Reality with Kinect will help you get into the world of Microsoft Kinect programming with the C/C++ language, through a few interesting recipes and a relatively complete example. The book introduces the following topics: the installation and initialization of Kinect applications; capturing color and depth images; obtaining skeleton and face tracking data; emulating multi-touch cursors and gestures; and developing a complete game using Kinect features. The book is organized so that each topic receives the right amount of focus, and beginners can start from the first chapter and build up to developing their own applications.

Face tracking in Kinect


Face detection and tracking is a well-known computer vision technique. It analyzes the images from a webcam or other input devices and tries to determine the locations and sizes of human faces in them. Detailed face parts, including the eyes, eyebrows, nose, and mouth, can also be estimated from the given images. We can even infer the emotion of a specific face, or the identity of a person, from the face tracking results.

The Microsoft Kinect SDK supports face tracking from Version 1.5 onwards. It requires color and depth images from the sensors (or customized sources) as inputs, and returns the position of the detected head as well as a set of tracked feature points on the face, all of which can be retrieved and used to reconstruct the 3D face mesh in real time.
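
To make this concrete, here is a minimal sketch of how the face tracker from the Kinect for Windows Developer Toolkit (FaceTrackLib) might be initialized and fed with one pair of color and depth frames. The stream resolutions, the focal-length constants, and the colorBuffer/depthBuffer parameters are assumptions for illustration only; the actual setup is built up step by step in the book's examples.

// A minimal face-tracking sketch based on FaceTrackLib from the Kinect for
// Windows Developer Toolkit. The 640x480 color / 320x240 depth resolutions
// and the colorBuffer/depthBuffer parameters are placeholder assumptions.
#include <windows.h>
#include <NuiApi.h>
#include <FaceTrackLib.h>
#include <cstdio>

bool trackFaceOnce(void* colorBuffer, void* depthBuffer)
{
    // Camera configurations for the color and depth streams.
    FT_CAMERA_CONFIG colorConfig = {640, 480, NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS};
    FT_CAMERA_CONFIG depthConfig = {320, 240, NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS};

    // Create and initialize the face tracker, plus a result object to receive data.
    IFTFaceTracker* faceTracker = FTCreateFaceTracker();
    if (!faceTracker || FAILED(faceTracker->Initialize(&colorConfig, &depthConfig, NULL, NULL)))
        return false;
    IFTResult* result = NULL;
    if (FAILED(faceTracker->CreateFTResult(&result)))
        return false;

    // Wrap the raw frame buffers in IFTImage objects without copying the pixels.
    IFTImage* colorImage = FTCreateImage();
    IFTImage* depthImage = FTCreateImage();
    colorImage->Attach(640, 480, colorBuffer, FTIMAGEFORMAT_UINT8_B8G8R8X8, 640 * 4);
    depthImage->Attach(320, 240, depthBuffer, FTIMAGEFORMAT_UINT16_D13P3, 320 * 2);

    POINT viewOffset = {0, 0};
    FT_SENSOR_DATA sensorData(colorImage, depthImage, 1.0f, &viewOffset);

    // StartTracking searches the whole image; on later frames, ContinueTracking
    // would be called instead because it is cheaper once a face has been found.
    HRESULT hr = faceTracker->StartTracking(&sensorData, NULL, NULL, result);
    bool tracked = SUCCEEDED(hr) && SUCCEEDED(result->GetStatus());
    if (tracked)
    {
        // Head pose: scale, rotation (pitch, yaw, roll), and translation in camera space.
        FLOAT scale, rotation[3], translation[3];
        result->Get3DPose(&scale, rotation, translation);

        // Tracked 2D feature points projected onto the color image.
        FT_VECTOR2D* points = NULL;
        UINT pointCount = 0;
        result->Get2DShapePoints(&points, &pointCount);
        printf("Head at (%.2f, %.2f, %.2f), %u tracked points\n",
               translation[0], translation[1], translation[2], pointCount);
    }

    colorImage->Release();
    depthImage->Release();
    result->Release();
    faceTracker->Release();
    return tracked;
}

In a real application, the tracking call would sit inside the frame loop, with the color and depth buffers refreshed from the Kinect streams on every iteration.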

At the end of this chapter, we will explain how the Microsoft Kinect SDK declares and generates the face mesh.
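
As a rough preview, the sketch below shows one way the fitted 3D face model could be queried from the tracker and a successful tracking result. The faceTracker and result objects are assumed to come from the previous sketch, and the use of GetFaceModel() and Get3DShape() here is illustrative; the chapter's own code may organize this differently.

// Illustrative sketch of reading back the fitted face model. Assumes the
// faceTracker and result objects from the previous sketch, with a successful
// tracking result; error handling is trimmed for brevity.
#include <windows.h>
#include <FaceTrackLib.h>
#include <vector>
#include <cstdio>

void printFaceMeshInfo(IFTFaceTracker* faceTracker, IFTResult* result)
{
    IFTModel* model = NULL;
    if (FAILED(faceTracker->GetFaceModel(&model)))
        return;

    // Static mesh topology: triangles indexing into the vertex array.
    FT_TRIANGLE* triangles = NULL;
    UINT triangleCount = 0;
    model->GetTriangles(&triangles, &triangleCount);

    // Shape units describe the identity of the face; animation units describe
    // the current expression. Together they deform the average face model.
    FLOAT headScale = 1.0f;
    FLOAT* shapeUnits = NULL;
    UINT shapeUnitCount = 0;
    BOOL converged = FALSE;
    faceTracker->GetShapeUnits(&headScale, &shapeUnits, &shapeUnitCount, &converged);

    FLOAT* animationUnits = NULL;
    UINT animationUnitCount = 0;
    result->GetAUCoefficients(&animationUnits, &animationUnitCount);

    // Head pose of the current frame (scale, rotation, translation).
    FLOAT scale, rotation[3], translation[3];
    result->Get3DPose(&scale, rotation, translation);

    // Compute the deformed 3D vertex positions in camera space.
    std::vector<FT_VECTOR3D> vertices(model->GetVertexCount());
    model->Get3DShape(shapeUnits, shapeUnitCount, animationUnits, animationUnitCount,
                      scale, rotation, translation,
                      &vertices[0], (UINT)vertices.size());

    printf("Face mesh: %u vertices, %u triangles\n",
           (UINT)vertices.size(), triangleCount);
    model->Release();
}

The vertex and triangle arrays obtained this way are what a rendering engine would use to draw the reconstructed face mesh over the live color image.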