Leap Motion Development Essentials

By: Mischa Spiegelmock

Overview of this book

Leap Motion is a company developing advanced motion-sensing technology for human–computer interaction. Originally inspired by the difficulty of using a mouse and keyboard for 3D modeling, Leap Motion believes that moulding virtual clay should be as easy as moulding clay in your hands, and now focuses on bringing this motion-sensing technology closer to the real world.

Leap Motion Development Essentials explains the concepts and practical applications of gesture input for developers who want to take full advantage of Leap Motion technology. It gives you a clear overview of the capabilities available to developers, explains the most important and central API concepts, and provides usable code samples throughout.

This book teaches you everything you need to design a gesture-enabled interface for your application, from creating a working program with gesture input and detecting specific gestures to best practices for this new input method. You will be given guidance on practical considerations, along with copious runnable demonstrations of API usage explained in step-by-step, reusable recipes.
Table of Contents (12 chapters)

An overview of the SDK


The Leap device uses a pair of cameras and an infrared pattern projected by LEDs to generate an image of your hands with depth information. A very small amount of processing is done on the device itself, in order to keep the cost of the units low.

The images are post-processed on your computer to remove noise, and to construct a model of your hands, fingers, and pointy tools that you are holding.

As an application developer, you can make use of this data via the Leap software development kit, which contains a powerful high-level API for easily integrating gesture input into your applications. Because developers do not want to go to the trouble of processing raw input in the form of depth-mapped images, skeleton models, and point cloud data, the SDK provides abstracted models that report what your user is doing with their hands. With the SDK you can write applications that make use of some familiar concepts:

  • All hands detected in a frame, including rotation, position, velocity, and movement since an earlier frame

  • All fingers and pointy tools (collectively known as "pointables") recognized as attached to each hand, with rotation, position, and velocity

  • The exact pixel location on a display pointed at by a finger or tool

  • Basic recognition of gestures such as swipes and taps

  • Detection of position and orientation changes between frames
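The SDK delivers these concepts through frame, hand, and pointable objects (the real bindings require the Leap libraries and a connected device). As a rough, self-contained sketch of how that frame-oriented data model fits together — the class layout and field names here are illustrative, not the SDK's exact API:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# A 3D vector: (x, y, z) in millimeters relative to the device.
Vector = Tuple[float, float, float]

@dataclass
class Pointable:
    """A finger or pointy tool recognized as attached to a hand."""
    tip_position: Vector   # millimeters
    tip_velocity: Vector   # millimeters per second

@dataclass
class Hand:
    """One detected hand, with its attached pointables."""
    palm_position: Vector
    palm_velocity: Vector
    pointables: List[Pointable] = field(default_factory=list)

@dataclass
class Frame:
    """A snapshot of everything the device sees at one instant."""
    frame_id: int
    hands: List[Hand] = field(default_factory=list)

def translation_between(earlier: "Frame", later: "Frame") -> Vector:
    """Movement of the first hand's palm since an earlier frame,
    mirroring the SDK's per-frame motion queries."""
    a = earlier.hands[0].palm_position
    b = later.hands[0].palm_position
    return (b[0] - a[0], b[1] - a[1], b[2] - a[2])

# Example: a hand moves 10 mm right and 5 mm toward the screen
# between two consecutive frames.
f1 = Frame(1, hands=[Hand((0.0, 100.0, 0.0), (0.0, 0.0, 0.0))])
f2 = Frame(2, hands=[Hand((10.0, 100.0, -5.0), (50.0, 0.0, -25.0))])
print(translation_between(f1, f2))  # → (10.0, 0.0, -5.0)
```

Your application polls (or is called back with) a stream of such frames and compares them to earlier ones to detect motion, which is exactly the pattern the later chapters build on.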