Kinect for Windows SDK Programming Guide

By: Abhijit Jana

Overview of this book

Kinect has been a game-changer in the world of motion games and applications since its first release. It has been touted as a controller for the Microsoft Xbox, but it is much more than that. The developer version, the Kinect for Windows SDK, provides developers with the tools to build applications that run on Windows and let users interact with their computers hands-free. This book focuses on developing applications using the Kinect for Windows SDK. It is a complete end-to-end guide to the different features of the SDK, with step-by-step guidance. The book will also help you develop motion-sensitive and speech-recognition-enabled applications, and you will learn about building applications that use multiple Kinects.

The book begins by explaining the different components of Kinect and then moves on to setting up the device and getting the development environment ready. You will be surprised at how quickly the book takes you through the details of the Kinect APIs. You will use the NUI APIs to work with natural inputs such as skeleton tracking, depth sensing, and speech recognition. You will capture different types of streams and images, handle stream events, and capture frames. The Kinect device contains a motorized tilt to control the sensor angle, and you will learn how to adjust it automatically. The last part of the book teaches you how to build applications using multiple Kinects and discusses how Kinect can be integrated with other devices such as Windows Phone and microcontrollers.
Table of Contents (19 chapters)
Kinect for Windows SDK Programming Guide
Credits
About the Author
Acknowledgement
About the Reviewers
www.PacktPub.com
Preface
Index

Skeleton space transformation


The Kinect sensor represents skeleton data in a 3D coordinate system. With respect to the Kinect sensor and the human body points, the x and y axes define the position of a joint, and the z axis represents its distance from the sensor. The overall representation of the skeleton data within this global space is known as the skeleton space. The skeleton space originates from the depth image, which returns the skeleton data as a set of joint positions. In the end, each joint position is represented by (x, y, z) coordinates.
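
To make this representation concrete, the following is a minimal sketch, not taken from the SDK, of how a single joint position in skeleton space might be modeled and how its straight-line distance from the sensor could be computed. The JointPosition type and the DistanceFromSensor helper are illustrative names, not SDK types.

```cpp
// Illustrative model of one skeleton joint in skeleton space (all values in meters).
#include <cmath>
#include <cstdio>

struct JointPosition {
    float x;  // horizontal offset from the sensor
    float y;  // vertical offset from the sensor
    float z;  // distance from the sensor along its viewing axis
};

// Straight-line (Euclidean) distance of a joint from the sensor origin.
float DistanceFromSensor(const JointPosition& joint) {
    return std::sqrt(joint.x * joint.x + joint.y * joint.y + joint.z * joint.z);
}

int main() {
    // Example joint roughly 2 meters in front of the sensor, slightly above center.
    JointPosition head = { 0.1f, 0.4f, 2.0f };
    std::printf("Distance from sensor: %.2f m\n", DistanceFromSensor(head));
    return 0;
}
```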

With only the skeleton data, it is difficult to interact directly with the user, because the user's coordinate space is different from that of the skeleton joint information. So we need some way to transform the skeleton joints' coordinate system into a global space where both the user and the application understand each other's coordinate system.
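
To see the idea behind such a transformation, here is a minimal sketch that projects a skeleton-space point (in meters) onto a 2D image plane (in pixels) using a simple pinhole-camera model. The JointPosition and ImagePoint types, the ProjectToImage helper, and the focal-length value are illustrative assumptions; in practice you would rely on the SDK's own mapping APIs rather than hand-tuned constants.

```cpp
#include <cstdio>

// Same illustrative joint type as in the previous sketch (meters, skeleton space).
struct JointPosition {
    float x;
    float y;
    float z;
};

// A 2D pixel position on the image plane.
struct ImagePoint {
    int x;
    int y;
};

// Perspective projection with an assumed focal length (in pixels): joints that
// are farther from the sensor (larger z) land closer to the image center.
// The image y axis grows downward, so the skeleton y value is negated.
ImagePoint ProjectToImage(const JointPosition& joint,
                          float focalLengthPixels,
                          int imageWidth,
                          int imageHeight) {
    ImagePoint p;
    p.x = static_cast<int>(imageWidth  / 2.0f + (joint.x / joint.z) * focalLengthPixels);
    p.y = static_cast<int>(imageHeight / 2.0f - (joint.y / joint.z) * focalLengthPixels);
    return p;
}

int main() {
    JointPosition rightHand = { 0.3f, 0.1f, 2.0f };       // roughly 2 m in front of the sensor
    ImagePoint pixel = ProjectToImage(rightHand, 285.0f,  // 285 is an assumed focal length
                                      320, 240);          // for a 320 x 240 depth image
    std::printf("Joint maps to pixel (%d, %d)\n", pixel.x, pixel.y);
    return 0;
}
```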

The Kinect for Windows SDK provides us with a set of APIs that allows us to easily translate...