In this section we build on the OpenGL framework discussed in the previous chapter to render point clouds. The texture-mapping technique introduced there also applies to the point cloud format: the depth sensor provides a set of vertices in real-world space (the depth map), and the color camera supplies the color of each vertex. Once the depth sensor and the color camera are calibrated, UV mapping reduces to a simple lookup table.
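To make the lookup concrete, here is a minimal sketch of mapping one depth vertex to a texture coordinate in the color image. It assumes a pinhole model with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy` are illustrative values, not the book's calibration data) and that the depth and color cameras are already aligned; a full pipeline would first apply the depth-to-color extrinsic transform.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical color-camera intrinsics (illustrative, not calibrated values).
struct Intrinsics {
    float fx, fy;      // focal lengths in pixels
    float cx, cy;      // principal point in pixels
    int width, height; // image resolution
};

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };

// Project a vertex (in camera space, z > 0) into the color image and
// normalize the pixel position to texture coordinates in [0, 1].
UV vertexToUV(const Vec3& p, const Intrinsics& color)
{
    float px = color.fx * p.x / p.z + color.cx; // pixel column
    float py = color.fy * p.y / p.z + color.cy; // pixel row
    return { px / color.width, py / color.height };
}
```

A point straight ahead of the camera (x = y = 0) lands on the principal point, i.e. near the center of the texture.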
Readers can either use the raw data provided for the demo that follows or capture their own raw data from a 3D range-sensing camera. In either case, we assume the raw data files are named depth_frame0.bin and color_frame0.bin.
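Loading such a raw frame is a straightforward binary read. The sketch below (the helper name `loadRawFrame` is ours, not from the book) reads the whole file into a byte buffer; how those bytes are interpreted (resolution, bit depth, pixel layout) depends on the camera that produced them.

```cpp
#include <cstdint>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Read an entire raw binary frame (e.g. depth_frame0.bin) into memory.
// The caller is responsible for interpreting the byte layout.
std::vector<uint8_t> loadRawFrame(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    return std::vector<uint8_t>(std::istreambuf_iterator<char>(in), {});
}
```

The returned buffer can then be reinterpreted as, for example, 16-bit depth values or 8-bit RGB triples before being uploaded to OpenGL.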