Effective Robotics Programming with ROS - Third Edition

By: Anil Mahtani, Luis Sánchez, Aaron Martinez, Enrique Fernandez Perdomo

Overview of this book

Building and programming a robot can be cumbersome and time-consuming, but not when you have the right collection of tools, libraries, and, more importantly, expert collaboration. ROS enables collaborative software development and offers an unmatched simulated environment that simplifies the entire robot-building process. This book is packed with hands-on examples that will help you program your robot and give you complete solutions using open source ROS libraries and tools. It also shows you how to use virtual machines and Docker containers to simplify the installation of Ubuntu and the ROS framework, so you can start working in an isolated and controlled environment without changing your regular computer setup. It starts with the installation and basic concepts, then continues with the more complex modules available in ROS, such as sensor and actuator integration (drivers), navigation and mapping (so you can create an autonomous mobile robot), manipulation, computer vision, 3D perception with PCL, and more. By the end of the book, you'll be able to leverage all the ROS Kinetic features to build a fully fledged robot for all your needs.

Creating transforms


The navigation stack needs to know the positions of the robot's sensors, wheels, and joints.

To do that, we use the Transform Frames (tf) software library, which manages a transform tree. You could compute these relationships with plain mathematics, but with many frames to keep track of, that quickly becomes complicated and error-prone.

Thanks to tf, we can add more sensors and parts to the robot, and tf will handle all the frame relationships for us.

If we mount the laser 10 cm behind and 20 cm above the origin of the base_link coordinate frame, we need to add a new frame to the transform tree with these offsets.

Once the frame is created and inserted into the tree, we can easily obtain the position of the laser relative to base_link or to the wheels; all we have to do is ask the tf library for the transformation.
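For example, such a query could look roughly like the following tf listener node. This is a minimal sketch, not code from the book; the child frame name base_laser is an assumption chosen to match the broadcaster shown in the next section:

#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tf_query_example");
  ros::NodeHandle node;

  tf::TransformListener listener;
  ros::Rate rate(10.0);

  while (node.ok())
  {
    tf::StampedTransform transform;
    try
    {
      // Ask tf for the latest available transform from base_link to base_laser.
      listener.lookupTransform("base_link", "base_laser",
                               ros::Time(0), transform);
    }
    catch (tf::TransformException& ex)
    {
      // The transform may not be available yet; wait and retry.
      ROS_WARN("%s", ex.what());
      ros::Duration(1.0).sleep();
      continue;
    }

    ROS_INFO("Laser at (%.2f, %.2f, %.2f) in base_link",
             transform.getOrigin().x(),
             transform.getOrigin().y(),
             transform.getOrigin().z());
    rate.sleep();
  }

  return 0;
}

Passing ros::Time(0) requests the latest transform available in the buffer rather than one at an exact timestamp, which avoids lookup errors while the broadcaster is still starting up.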

Creating a broadcaster

Let's test this with some simple code. Create a new file named tf_broadcaster.cpp in chapter5_tutorials/src and put the following code in it:

#include <...
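The listing is truncated here. As a rough sketch (not necessarily the book's exact code), a broadcaster matching the laser placement described above could look like this; the node name robot_tf_publisher, the child frame name base_laser, and the 100 Hz rate are assumptions:

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;

  ros::Rate r(100);  // publish the transform at 100 Hz
  tf::TransformBroadcaster broadcaster;

  while (n.ok())
  {
    // base_laser sits 10 cm behind (-x) and 20 cm above (+z) base_link,
    // with no rotation (identity quaternion).
    broadcaster.sendTransform(
        tf::StampedTransform(
            tf::Transform(tf::Quaternion(0, 0, 0, 1),
                          tf::Vector3(-0.1, 0.0, 0.2)),
            ros::Time::now(), "base_link", "base_laser"));
    r.sleep();
  }

  return 0;
}

Because this transform never changes, a static_transform_publisher from the tf package could also do the job; broadcasting it in a loop, as here, simply keeps the example self-contained and matches the pattern used for moving frames.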