Learning ROS for Robotics Programming

By: Aaron Martinez, Enrique Fernández

Overview of this book

Any amateur or professional roboticist who has ever tried their hand at robotics programming will have faced the cumbersome task of starting from scratch, usually reinventing the wheel. ROS comes with a great number of already working functionalities, and this book takes you from the first steps to the most elaborate designs possible within this software framework.

"Learning ROS for Robotics Programming" is full of practical examples that will help you understand the framework from the very beginning. Build your own robot applications in a simulated environment and share your knowledge with the large community supporting ROS.

"Learning ROS for Robotics Programming" starts with the basic concepts and usage of ROS in a very straightforward and practical manner. It is a painless introduction to the fascinating world of robotics, covering sensor integration, modeling, simulation, computer vision, and navigation algorithms, among other topics.

After the first two chapters, concepts such as topics, messages, and nodes will become your daily bread. Make your robot see with HD cameras, or navigate around obstacles with range sensors. Furthermore, thanks to the contributions of the vast ROS community, your robot will be able to navigate autonomously, and even recognize and interact with you, in a matter of minutes.

"Learning ROS for Robotics Programming" gives you all the background you need to start out in the fascinating world of robotics and program your own robot. You set the limits!

Creating transforms


The navigation stack needs to know the position of the sensors, wheels, and joints.

To do that, we use the TF (which stands for Transform Frames) software library. It manages a transform tree, that is, the set of coordinate frames of the robot and the relations between them. You could compute these transformations by hand, but with many frames to keep track of, the math quickly becomes complicated and error-prone.

Thanks to TF, we can add more sensors and parts to the robot, and TF will handle all the relations between their frames for us.

If we mount the laser 10 cm backwards and 20 cm above the origin of the base_link coordinate frame, we need to add a new frame to the transform tree with these offsets.

Once the frame has been created and inserted into the tree, we can easily obtain the position of the laser relative to base_link or to the wheels. The only thing we need to do is ask the TF library for the transformation.
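To give an idea of what such a query looks like, here is a minimal sketch (not a listing from the book) of a node that uses tf::TransformListener to ask for the transform between base_link and a base_laser frame; it assumes that some other node, such as the broadcaster we write next, is publishing that frame:

#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tf_listener");
  ros::NodeHandle node;

  tf::TransformListener listener;
  ros::Rate rate(1.0);

  while (node.ok())
  {
    tf::StampedTransform transform;
    try
    {
      // Ask TF for the latest available transform from base_laser to base_link
      listener.lookupTransform("base_link", "base_laser", ros::Time(0), transform);
      ROS_INFO("Laser position in base_link: x=%.2f y=%.2f z=%.2f",
               transform.getOrigin().x(),
               transform.getOrigin().y(),
               transform.getOrigin().z());
    }
    catch (tf::TransformException& ex)
    {
      // The transform may not be available yet; warn and keep trying
      ROS_WARN("%s", ex.what());
    }
    rate.sleep();
  }

  return 0;
}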

Creating a broadcaster

Let's test it with some simple code. Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside...
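As a rough sketch of what such a broadcaster might look like (the frame name base_laser is an assumption, and the offsets are taken from the example above), a node following the standard tf::TransformBroadcaster pattern could be:

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  // Initialize the node that will publish the transform
  ros::init(argc, argv, "tf_broadcaster");
  ros::NodeHandle n;

  ros::Rate r(100);  // broadcast at 100 Hz

  tf::TransformBroadcaster broadcaster;

  while (n.ok())
  {
    // base_laser is 10 cm behind and 20 cm above base_link,
    // with no rotation (identity quaternion)
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(-0.10, 0.0, 0.20)),
        ros::Time::now(), "base_link", "base_laser"));
    r.sleep();
  }

  return 0;
}

You can check that the transform is being published with rosrun tf tf_echo base_link base_laser. For a fixed offset like this one, the tf package also provides the static_transform_publisher tool, which can broadcast the same transform from the command line or a launch file without writing any code.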