Mastering ROS for Robotics Programming - Second Edition

By: Jonathan Cacace, Lentin Joseph

Overview of this book

In this day and age, robotics has been gaining a lot of traction in various industries where consistency and precision matter. Automation is achieved via robotic applications and the various platforms that support them. The Robot Operating System (ROS) is a modular software platform for developing generic robotic applications. This book focuses on the most stable release of ROS (Kinetic Kame), discusses advanced concepts, and effectively teaches you programming using ROS. We begin with an informative overview of the ROS framework, which will give you a clear idea of how ROS works. Over the course of this book, you'll learn to build models of complex robots, and to simulate and interface robots using the MoveIt! motion planning library and the ROS navigation stack. You'll learn to leverage several ROS packages to enhance your robot models. After covering robot manipulation and navigation, you'll get to grips with interfacing I/O boards, sensors, and actuators with ROS. Vision sensors are a key component of robots, and an entire chapter is dedicated to vision sensors and image processing, along with their interfacing and programming in ROS. You'll also learn about the hardware interfacing and simulation of complex robots in ROS and ROS-Industrial. By the end of this book, you'll know the best practices to follow when programming using ROS.

Working with perception using MoveIt! and Gazebo


Until now, in MoveIt!, we have worked with an arm only. In this section, we will see how to interface 3D vision sensor data with MoveIt!. The sensor can either be simulated using Gazebo, or you can directly interface an RGB-D sensor, such as the Kinect or Xtion Pro, using the openni_launch package. Here, we will work with the Gazebo simulation. We will add sensors to MoveIt! for vision-assisted pick-and-place. For the pick-and-place operation, we will create a grasp table and a grasp object in Gazebo by adding two custom models called Grasp_Object and Grasp_Table. The sample models are placed in the model directory of the seven_dof_arm_test package, and should be copied to the ~/.gazebo/models folder so that Gazebo can locate them (a copy command is sketched after the launch command below). The following command will launch the robot arm and the Asus Xtion Pro simulation in Gazebo:

$ roslaunch seven_dof_arm_gazebo seven_dof_arm_bringup_grasping.launch
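
If the models have not been copied yet, a command such as the following would do it. This is a minimal sketch: it assumes the model directory layout described above and a sourced ROS environment in which rospack can find the package:

$ mkdir -p ~/.gazebo/models
$ cp -r $(rospack find seven_dof_arm_test)/model/* ~/.gazebo/models/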

This command will open up Gazebo with arm joint...
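
For reference, MoveIt! typically consumes a sensor's point cloud through its occupancy map monitor, which builds an octomap of the environment for collision-aware planning. The plugin is configured in a sensors YAML file loaded by the MoveIt! sensor manager launch file. The following is a minimal sketch of such a file; the file name, topic name, and parameter values are illustrative assumptions for a simulated Xtion and may differ from the ones used in the seven_dof_arm configuration:

# sensors_rgbd.yaml -- file name is illustrative
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    # Topic published by the simulated RGB-D sensor (assumed name)
    point_cloud_topic: /rgbd_camera/depth/points
    max_range: 5.0            # ignore points farther than 5 m
    point_subsample: 1        # use every point; increase to thin the cloud
    padding_offset: 0.1       # padding used when filtering out the robot body
    padding_scale: 1.0
    # Debug topic carrying the cloud with the robot's own body removed
    filtered_cloud_topic: filtered_cloud

With such a file in place, the planning scene in RViz shows an octomap built from the sensor data, which MoveIt! then takes into account when planning arm motions.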