OpenNI Cookbook

By Soroush Falahati

Overview of this book

The release of the Microsoft Kinect, followed by the PrimeSense Sensor and the Asus Xtion, opened new doors for developers to interact with users, redesign their applications' UIs, and make them environment (context) aware. For this purpose, developers need a solid framework that provides a complete application programming interface (API), and OpenNI is the first choice in this field. This book introduces the new version of OpenNI.

"OpenNI Cookbook" will show you how to start developing a Natural Interaction UI for your applications or games with high-level APIs while, at the same time, accessing raw data from the different sensors of the different hardware supported by OpenNI using low-level APIs. It also covers extending OpenNI by writing new modules, and extending applications using different OpenNI-compatible middleware, including NITE.

"OpenNI Cookbook" favors practical examples over plain theory, giving you a more hands-on experience to help you learn. It starts with installing devices and retrieving raw data from them, and then shows how to use this data in applications. Through examples, you will learn how to access a device, read data from it and display it using OpenGL, and use middleware (especially NITE) to track and recognize users and hands and to estimate the skeleton of a person in front of a device. You will also learn about more advanced topics, such as writing a simple module or middleware for OpenNI itself. "OpenNI Cookbook" shows you how to start with and experiment with both NIUI design and OpenNI itself using examples.
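As a taste of the low-level access described above, here is a minimal sketch of opening a device and reading a single depth frame with the OpenNI 2 C++ API. The calls shown are part of the public OpenNI 2 API; error handling is trimmed to the essentials for brevity, and the book's recipes cover it in full.

#include <iostream>
#include <OpenNI.h>

int main()
{
    // Initialize OpenNI and open the first device found.
    if (openni::OpenNI::initialize() != openni::STATUS_OK)
        return 1;
    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK)
        return 1;

    // Create and start a stream on the device's depth sensor.
    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    // Read one frame and print the depth value at the center pixel.
    openni::VideoFrameRef frame;
    depth.readFrame(&frame);
    const openni::DepthPixel* pixels =
        static_cast<const openni::DepthPixel*>(frame.getData());
    int center = (frame.getHeight() / 2) * frame.getWidth()
               + frame.getWidth() / 2;
    std::cout << "Depth at center: " << pixels[center] << " mm" << std::endl;

    // Release resources in reverse order of creation.
    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}

On success, this prints the distance, in millimeters, of whatever is directly in front of the sensor; with the default depth pixel format, each pixel is a 16-bit depth value in millimeters.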
Table of Contents (14 chapters)
OpenNI Cookbook
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
Index

About the Reviewers

Vinícius Godoy is a computer graphics university professor at PUCPR. He is also the IT manager of Sinax, an Enterprise Content Management (ECM) company in Brazil. His previous experience includes building games and applications for Positivo Informática, including an augmented reality educational game exhibited at CeBIT, as well as network libraries for Siemens Enterprise Communications.

In his research, he used the Kinect, OpenNI, and OpenCV to recognize Brazilian sign language gestures. He is also a game development enthusiast and runs Ponto V (http://www.pontov.com.br), a popular website entirely dedicated to the field. He is mainly proficient in C++ and Java, and his fields of interest include graphics synthesis, image processing, image recognition, design patterns, the Internet, and multithreaded applications.

Li Yang Ku is a computer vision scientist and the main author of the Serious Computer Vision Blog (http://computervisionblog.wordpress.com), one of the foremost computer vision blogs. He is also the founder of EatPaper (http://www.eatpaper.org), a free web tool for organizing publications visually.

He worked as a researcher at HRL Laboratories in Malibu, California, from 2011 to 2013. He has done AI research on multiple humanoid robots and designed one of the vision systems for NASA's humanoid space robot, Robonaut 2, at NASA JSC in Houston. He also has broad experience with RGB-D sensor applications, such as object recognition, object tracking, human activity classification, SLAM, and quadrotor navigation.

Li Yang Ku received his MS degree in CS from the University of California, Los Angeles, and holds a BS degree in EE from National Chiao Tung University, Taiwan. He is now pursuing a Ph.D. at the University of Massachusetts Amherst.

Liza Roumani was born in Paris in 1989. After passing the French scientific Baccalaureate, she decided to move to Israel.

After one year at university in Jerusalem, she joined the Technion - Israel Institute of Technology in Haifa, where she obtained a BSc degree in Electrical Engineering.

Liza Roumani is currently working at PrimeSense, the worldwide leader in 3D sensor technology.