Hands-On ROS for Robotics Programming

By: Bernardo Ronquillo Japón

Overview of this book

Connecting a physical robot to a robot simulation using the Robot Operating System (ROS) infrastructure is one of the most common challenges faced by ROS engineers. With this book, you'll learn how to simulate a robot in a virtual environment and achieve the desired behavior in equivalent real-world scenarios. This book starts with an introduction to GoPiGo3 and the sensors and actuators with which it is equipped. You'll then work with GoPiGo3's digital twin by creating a 3D model from scratch and running a simulation in ROS using Gazebo. Next, the book will show you how to use GoPiGo3 to build and run an autonomous mobile robot that is aware of its surroundings. Finally, you'll find out how a robot can learn tasks that have not been explicitly programmed but are instead acquired by observing its environment. You'll also cover topics such as deep learning and reinforcement learning. By the end of this robot programming book, you'll be well-versed in the basics of building specific-purpose applications in robotics and developing highly intelligent autonomous robots from scratch.
Table of Contents (19 chapters)

Section 1: Physical Robot Assembly and Testing
Section 2: Robot Simulation with Gazebo
Section 3: Autonomous Navigation Using SLAM
Section 4: Adaptive Robot Behavior Using Machine Learning

Training GoPiGo3 to reach a target location while avoiding obstacles

Before running the training in this scenario, we should adjust a parameter that dramatically affects the computational cost: the horizontal sampling of the LDS. This parameter matters because the state of the robot is characterized by the set of range values at a given step of the simulation. In previous chapters, when we performed navigation in Gazebo, we used a sampling of 360 for the LDS, which means that we have circumferential range measurements at 1° resolution.
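In Gazebo, this sampling is set through the <samples> element of the laser sensor's description in the robot model. The following fragment is a minimal sketch of what that setting could look like for a ray sensor using the standard gazebo_ros laser plugin; the link name, sensor name, and range limits are illustrative placeholders, not the exact values from the GoPiGo3 model:

<gazebo reference="base_scan">  <!-- placeholder link name -->
  <sensor type="ray" name="lds_sensor">
    <ray>
      <scan>
        <horizontal>
          <samples>24</samples>  <!-- reduced from 360 (1° resolution) -->
          <resolution>1</resolution>
          <min_angle>-3.14159</min_angle>
          <max_angle>3.14159</max_angle>
        </horizontal>
      </scan>
      <range>
        <min>0.12</min>  <!-- placeholder range limits -->
        <max>3.5</max>
      </range>
    </ray>
    <plugin name="gazebo_ros_lds" filename="libgazebo_ros_laser.so">
      <topicName>scan</topicName>
      <frameName>base_scan</frameName>
    </plugin>
  </sensor>
</gazebo>

With this change, every LaserScan message published on the scan topic carries 24 readings instead of 360, so no downstream code has to do the reduction itself.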

For this example of reinforcement learning, we are reducing the sampling to 24, which means a range resolution of 15°. The positive aspect of this decision is that the state vector shrinks from 360 items to 24, a reduction by a factor of 15. As you may have guessed, this makes the simulation more computationally efficient. In contrast, you...
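As a rough sketch of what this reduction amounts to in code, the following Python snippet builds a 24-item state vector from a raw 360-reading scan, in case you prefer to downsample in software rather than in the sensor model. The node name is hypothetical, the scan topic follows the usual ROS convention, and taking the minimum range per 15-degree sector is an illustrative choice (it keeps the closest obstacle in each sector):

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan

NUM_SECTORS = 24  # 360 readings grouped into 24 sectors of 15 degrees each

def scan_callback(msg):
    # Group the raw 1-degree readings into 15-degree sectors and keep the
    # minimum range per sector, that is, the closest obstacle in each one.
    step = len(msg.ranges) // NUM_SECTORS  # 360 // 24 = 15
    state = [min(msg.ranges[i * step:(i + 1) * step])
             for i in range(NUM_SECTORS)]
    rospy.loginfo("state vector (%d items): %s", len(state), state)

if __name__ == '__main__':
    rospy.init_node('lds_downsampler')  # hypothetical node name
    rospy.Subscriber('scan', LaserScan, scan_callback)
    rospy.spin()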