ROS Programming: Building Powerful Robots

By: Anil Mahtani, Aaron Martinez, Enrique Fernandez Perdomo, Luis Sánchez, Lentin Joseph

Overview of this book

This learning path is designed to help you program and build your robots using open source ROS libraries and tools. We start with the installation and basic concepts, then continue with the more complex modules available in ROS, such as sensor and actuator integration (drivers), navigation and mapping (so you can create an autonomous mobile robot), manipulation, computer vision, perception in 3D with PCL, and more. We then discuss advanced concepts in robotics and how to program using ROS. You'll get a deep overview of the ROS framework, which will give you a clear idea of how ROS really works. During the course of the book, you will learn how to build models of complex robots, and simulate and interface the robot using the ROS MoveIt motion planning library and ROS navigation stacks. We'll go through great projects such as building a self-driving car, an autonomous mobile robot, and image recognition using deep learning and ROS. You can find beginner, intermediate, and expert ROS robotics applications inside! It includes content from the following Packt products:

- Effective Robotics Programming with ROS - Third Edition
- Mastering ROS for Robotics Programming
- ROS Robotics Projects

Working with TurtleBot simulation in VR


We can start a TurtleBot simulation using the following command:

$ roslaunch turtlebot_gazebo turtlebot_playground.launch

This brings up the TurtleBot simulation in Gazebo, as shown here:

Figure 17: TurtleBot simulation in Gazebo
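
Once the simulation is up, you can check which image topics it is publishing before connecting anything else to them. The following is a minimal rospy sketch; the node name and the 'image' substring filter are just illustrative:

#!/usr/bin/env python
# List the image topics currently advertised in the ROS graph.
import rospy

rospy.init_node('list_image_topics', anonymous=True)
for topic, msg_type in rospy.get_published_topics():
    if 'image' in topic:
        print('%s  [%s]' % (topic, msg_type))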

You can move the robot by launching the teleop node with the following command:

$ roslaunch turtlebot_teleop keyboard_teleop.launch
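
The keyboard teleop node reads key presses and publishes velocity commands for the robot base. If you prefer to drive the robot from a script instead, a minimal rospy publisher such as the following can send the same kind of commands; note that the command topic name (/cmd_vel_mux/input/teleop) is an assumption and may differ in your TurtleBot setup:

#!/usr/bin/env python
# Minimal velocity publisher sketch; the topic name below is an
# assumption and may need to change for your TurtleBot configuration.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('simple_drive')
pub = rospy.Publisher('/cmd_vel_mux/input/teleop', Twist, queue_size=1)
rate = rospy.Rate(10)           # publish at 10 Hz
cmd = Twist()
cmd.linear.x = 0.2              # drive forward at 0.2 m/s
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()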

You can now drive the robot around. Next, launch the ROS-VR app again and connect it to the ROS master running on the PC. Then, relay the compressed RGB image data from Gazebo to the image topic the app expects, like this:

$ rosrun topic_tools relay /camera/rgb/image_raw/compressed /usb_cam/image_raw/compressed
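
The relay node simply subscribes to the first topic and republishes every message it receives on the second one, so the VR app sees the simulated camera stream under the topic name it expects. As a rough sketch of what this does (not the actual topic_tools implementation), an equivalent rospy node would look like this:

#!/usr/bin/env python
# Rough rospy equivalent of the topic_tools relay command above:
# republish the Gazebo compressed camera images on the app's topic.
import rospy
from sensor_msgs.msg import CompressedImage

rospy.init_node('image_relay')
pub = rospy.Publisher('/usb_cam/image_raw/compressed', CompressedImage, queue_size=1)
rospy.Subscriber('/camera/rgb/image_raw/compressed', CompressedImage, pub.publish)
rospy.spin()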

Now the robot's camera image is visualized in the app, and if you put the phone into a VR headset, it gives the impression of a 3D environment. The following screenshot shows the split view of the images from Gazebo:

Figure 18: Gazebo image view in ROS-VR app

For now, you can move the robot using the keyboard...