The navigation stack needs to know the positions of the robot's sensors, wheels, and joints.
To keep track of them, we use the TF (Transform Frames) software library, which manages a transform tree. You could compute these coordinate transformations by hand, but with many frames involved the mathematics quickly becomes complicated and error-prone.
Thanks to TF, we can add more sensors and parts to the robot, and TF will maintain all the relations between their frames for us.
If we mount the laser 10 cm backwards and 20 cm above the origin of the base_link coordinate frame, we need to add a new frame to the transform tree with these offsets.
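One common way to publish such a fixed offset is the static_transform_publisher node from the tf package. The following launch-file fragment is a minimal sketch under that assumption; the child frame name laser is our choice, and the arguments are x y z yaw pitch roll (in meters and radians), so 10 cm backwards is -0.10 in x and 20 cm above is 0.20 in z:

```xml
<launch>
  <!-- Publish base_link -> laser: 10 cm behind and 20 cm above base_link,
       no rotation, republished every 100 ms -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_laser"
        args="-0.10 0.0 0.20 0.0 0.0 0.0 base_link laser 100" />
</launch>
```

With this node running, the laser frame appears in the transform tree and every other frame's relation to it is derived automatically.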
Once that frame is created and inserted into the tree, we can easily obtain the position of the laser with regard to base_link
or to the wheels. The only thing we need to do is ask the TF
library for the transformation.
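Under the hood, TF is composing rigid-body transforms along the tree for us. As a rough illustration only (plain Python in 2D, not the actual TF API; the frame values and the compose helper are made up for this sketch), chaining an odom-to-base_link transform with the base_link-to-laser offset gives the laser pose in the odom frame:

```python
import math

def compose(a, b):
    """Chain two 2D rigid transforms, each given as (x, y, yaw)."""
    x1, y1, th1 = a
    x2, y2, th2 = b
    # Rotate b's translation by a's yaw, then add a's translation
    x = x1 + math.cos(th1) * x2 - math.sin(th1) * y2
    y = y1 + math.sin(th1) * x2 + math.cos(th1) * y2
    return (x, y, th1 + th2)

# Illustrative values: robot at (1.0, 2.0) facing +x in the odom frame
odom_to_base = (1.0, 2.0, 0.0)
# Laser mounted 10 cm backwards (the 20 cm height is dropped in 2D)
base_to_laser = (-0.10, 0.0, 0.0)

odom_to_laser = compose(odom_to_base, base_to_laser)
print(odom_to_laser)  # (0.9, 2.0, 0.0): the laser sits 10 cm behind the robot
```

This is exactly the bookkeeping that becomes messy by hand once the tree holds many frames; TF performs these compositions (in full 3D) whenever we request a transformation between any two frames.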