Kinect for Windows SDK Programming Guide

By: Abhijit Jana

Overview of this book

Kinect has been a game-changer in the world of motion games and applications since its first release. It has been touted as a controller for Microsoft Xbox, but it is much more than that. The developer offering, the Kinect for Windows SDK, gives developers the tools to build applications that run on Windows, and you can use it to create applications that let users interact with the computer hands-free. This book focuses on developing applications using the Kinect for Windows SDK. It is a complete end-to-end guide that covers the different features of the SDK with step-by-step guidance. The book will also help you develop motion-sensing and speech-recognition-enabled applications, and you will learn about building applications that use multiple Kinects. The book begins by explaining the different components of Kinect and then moves on to setting up the device and getting the development environment ready. You will be surprised at how quickly the book takes you through the details of the Kinect APIs. You will use the NUI APIs to handle natural inputs such as skeleton tracking and speech recognition, capture different types of streams and images, handle stream events, and capture frames. The Kinect device contains a motorized tilt to control the sensor angle, and you will learn how to adjust it programmatically. The last part of the book teaches you how to build applications using multiple Kinects and discusses how Kinect can be integrated with other devices such as Windows Phone and microcontrollers.

Multiple Kinects – how to reduce interference


When we talk about multiple Kinects, the first question that comes to mind is interference. We know that Kinect measures depth by reading the IR pattern projected by its IR emitter. When multiple sensors are placed in the same area, the IR patterns projected by the different sensors can interfere with one another. In such scenarios, the Kinect sensors return incorrect depth data; that is, the X, Y, and Z values are wrong for the IR dots affected by the interference.
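Before dealing with interference, all the connected sensors need to be up and running. The following is a minimal sketch (my own, not a listing from this book) showing how the connected sensors could be enumerated and their depth streams enabled using the managed Kinect SDK API; the MultipleKinectStarter class and its members are hypothetical names introduced for illustration.

// A minimal sketch: enumerate every connected Kinect sensor and enable its
// depth stream. Each sensor projects its own IR pattern, so overlapping
// fields of view can interfere with one another.
using System;
using System.Collections.Generic;
using Microsoft.Kinect;

class MultipleKinectStarter
{
    private readonly List<KinectSensor> startedSensors = new List<KinectSensor>();

    public void StartAllSensors()
    {
        foreach (KinectSensor sensor in KinectSensor.KinectSensors)
        {
            if (sensor.Status != KinectStatus.Connected)
            {
                continue;
            }

            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.DepthFrameReady += OnDepthFrameReady;
            sensor.Start();
            startedSensors.Add(sensor);
        }

        Console.WriteLine("Started {0} sensor(s).", startedSensors.Count);
    }

    private void OnDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
    {
        using (DepthImageFrame frame = e.OpenDepthImageFrame())
        {
            if (frame == null)
            {
                return;
            }
            // Pixels hit by interference typically come back with no usable
            // depth value; see the unknown-depth check later in this section.
        }
    }
}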

Note

Why can't Kinect distinguish its own projected IR from the IR projected by other sensors? That would have fixed the interference issue. The answer is that the IR laser is not modulated.

You could imagine a technique that modulates the individual pattern coming from each Kinect so that, based on the pattern, a sensor could identify its own dots. However, no support for this exists in the Kinect SDK.
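Although the SDK cannot tell you which sensor a given IR dot came from, interference usually shows up as pixels with no usable depth. The following is a rough heuristic of my own (not an SDK feature): count the unknown-depth pixels in a frame and watch whether the ratio jumps when a second sensor is switched on. The InterferenceCheck class is a hypothetical name used for illustration.

// A hedged sketch: the fraction of pixels reporting unknown depth can serve
// as a rough indicator of IR interference between overlapping sensors.
using Microsoft.Kinect;

static class InterferenceCheck
{
    public static double UnknownDepthRatio(KinectSensor sensor, DepthImageFrame frame)
    {
        short[] rawDepth = new short[frame.PixelDataLength];
        frame.CopyPixelDataTo(rawDepth);

        int unknownCount = 0;
        for (int i = 0; i < rawDepth.Length; i++)
        {
            // The lower bits carry the player index; shift them away to get
            // the depth value in millimeters.
            int depth = rawDepth[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
            if (depth == sensor.DepthStream.UnknownDepth)
            {
                unknownCount++;
            }
        }

        return (double)unknownCount / rawDepth.Length;
    }
}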

When there are multiple Kinects, the IR from each of them can interfere with...