The Leap device uses a pair of cameras and an infrared pattern projected by LEDs to generate an image of your hands with depth information. Only a small amount of processing is done on the device itself, to keep the cost of the units low.
The images are post-processed on your computer to remove noise and to construct a model of your hands, fingers, and any pointy tools you are holding.
As an application developer, you can make use of this data via the Leap software developer kit (SDK), which contains a powerful high-level API for easily integrating gesture input into your applications. Because developers do not want to go to the trouble of processing raw input in the form of depth-mapped images, skeleton models, and point cloud data, the SDK provides abstracted models that report what your user is doing with their hands. With the SDK you can write applications that make use of some familiar concepts, each illustrated by a short sketch after this list:
All hands detected in a frame, including rotation, position, velocity, and movement since an earlier frame
All fingers and pointy tools (collectively known as "pointables") recognized as attached to each hand, with rotation, position, and velocity
The exact pixel location on a display pointed at by a finger or tool
Basic recognition of gestures such as swipes and taps
Detection of position and orientation changes between frames
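To make the first two items concrete, here is a minimal sketch using the Leap SDK's Python bindings (a Leap.py module plus a native library shipped with the SDK, which targets Python 2.7). The property names follow the official Python API, but treat this as an illustrative sketch rather than a finished application. It polls the controller for the current frame and reports each hand along with its attached pointables:

```python
import time
import Leap  # Leap.py plus the native LeapPython library from the SDK

def report_frame(controller):
    frame = controller.frame()      # the most recent frame of tracking data
    previous = controller.frame(1)  # one frame earlier, for motion deltas

    for hand in frame.hands:
        # Palm position (mm) and velocity (mm/s) in Leap coordinates
        print("Palm position: %s" % hand.palm_position)
        print("Palm velocity: %s" % hand.palm_velocity)
        # How far this hand has moved since the earlier frame
        print("Translation since previous frame: %s" % hand.translation(previous))

        # Fingers and tools attached to this hand ("pointables")
        for pointable in hand.pointables:
            kind = "Finger" if pointable.is_finger else "Tool"
            print("%s tip position: %s, velocity: %s"
                  % (kind, pointable.tip_position, pointable.tip_velocity))

if __name__ == "__main__":
    controller = Leap.Controller()
    time.sleep(1)  # give the controller a moment to connect and start tracking
    report_frame(controller)
```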
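The pixel-location lookup works by intersecting a pointable's direction ray with a calibrated display. A sketch of that lookup, assuming the SDK v1 located-screens API and at least one screen whose location has been calibrated:

```python
import math
import Leap

def pointer_pixel(controller):
    """Return the (x, y) pixel a pointable is aiming at, or None."""
    frame = controller.frame()
    screens = controller.located_screens  # needs a calibrated screen location
    if screens.is_empty or frame.pointables.is_empty:
        return None

    screen = screens[0]
    pointable = frame.pointables[0]

    # Project the pointable's direction ray onto the screen plane.
    # The second argument asks for normalized (0..1) coordinates.
    hit = screen.intersect(pointable, True)
    if math.isnan(hit.x):
        return None  # the pointable is not aimed at the screen

    # Scale to pixels; the intersection origin is the screen's bottom-left
    # corner, so flip y for conventional top-left pixel coordinates.
    x = hit.x * screen.width_pixels
    y = (1.0 - hit.y) * screen.height_pixels
    return (x, y)
```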
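Gesture recognition is disabled by default, so each gesture type must be enabled on the controller before frames will report it. A sketch that watches for swipes and key taps:

```python
import Leap

def watch_gestures(controller):
    # Gesture recognition is off by default; enable the types you care about.
    controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)
    controller.enable_gesture(Leap.Gesture.TYPE_KEY_TAP)

    frame = controller.frame()
    for gesture in frame.gestures():
        if gesture.type == Leap.Gesture.TYPE_SWIPE:
            swipe = Leap.SwipeGesture(gesture)  # cast to the specific subclass
            print("Swipe direction: %s, speed: %f mm/s"
                  % (swipe.direction, swipe.speed))
        elif gesture.type == Leap.Gesture.TYPE_KEY_TAP:
            tap = Leap.KeyTapGesture(gesture)
            print("Key tap at: %s" % tap.position)
```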
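Finally, the frame itself can report aggregate motion relative to an earlier frame, which is what the last item in the list refers to. A sketch using the frame-motion methods:

```python
import Leap

def frame_motion(controller):
    frame = controller.frame()      # the current frame
    earlier = controller.frame(10)  # the frame ten updates ago

    # Aggregate motion of everything in view since the earlier frame
    print("Translation: %s" % frame.translation(earlier))
    print("Rotation axis: %s" % frame.rotation_axis(earlier))
    print("Rotation angle: %f radians" % frame.rotation_angle(earlier))
    print("Scale factor: %f" % frame.scale_factor(earlier))
```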