Hands-On ROS for Robotics Programming

By: Bernardo Ronquillo Japón

Overview of this book

Connecting a physical robot to a robot simulation using the Robot Operating System (ROS) infrastructure is one of the most common challenges faced by ROS engineers. With this book, you'll learn how to simulate a robot in a virtual environment and achieve the desired behavior in equivalent real-world scenarios. The book starts with an introduction to GoPiGo3 and the sensors and actuators with which it is equipped. You'll then work with GoPiGo3's digital twin by creating a 3D model from scratch and running a simulation in ROS using Gazebo. Next, the book shows you how to use GoPiGo3 to build and run an autonomous mobile robot that is aware of its surroundings. Finally, you'll find out how a robot can learn tasks that have not been explicitly programmed but are instead acquired by observing its environment, covering topics such as deep learning and reinforcement learning. By the end of this robot programming book, you'll be well-versed in the basics of building special-purpose applications in robotics and developing highly intelligent autonomous robots from scratch.
Table of Contents (19 chapters)

Section 1: Physical Robot Assembly and Testing
Section 2: Robot Simulation with Gazebo
Section 3: Autonomous Navigation Using SLAM
Section 4: Adaptive Robot Behavior Using Machine Learning

Questions

  1. How does an agent learn following the RL approach?

A) Via the experience that it gets from the reward it receives each time it executes an action.
B) By randomly exploring the environment and discovering the best strategy by trial and error.
C) Via a neural network that outputs a Q-value as a function of the state of the system.

  2. Does an agent trained with RL have to make predictions of the expected outcome of an action?

A) Yes; this is a characteristic called model-free RL.
B) Only if it does not take the model-free RL approach.
C) No; by definition, RL methods only need to be aware of rewards and penalties to ensure the learning process.

  3. If you run the Q-learning algorithm with a learning rate, alpha, of 0.7, what does this mean from the point of view of the learning process? (A sketch of the Q-learning update rule follows these questions.)

A) That you keep the top 30% of the state-action pairs that provide the higher...
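
For context on question 3, here is a minimal sketch of the tabular Q-learning update rule, showing how the learning rate alpha weights newly observed information against the current estimate. The table size, transition, and reward values below are illustrative assumptions, not code from the book.

import numpy as np

# Illustrative tabular Q-learning update (hypothetical values, not the book's code)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))

alpha = 0.7   # learning rate: weight given to the newly observed estimate
gamma = 0.9   # discount factor applied to future rewards

state, action, reward, next_state = 0, 1, 1.0, 2  # one hypothetical transition

# Q(s, a) <- (1 - alpha) * Q(s, a) + alpha * (r + gamma * max_a' Q(s', a'))
td_target = reward + gamma * np.max(Q[next_state])
Q[state, action] = (1 - alpha) * Q[state, action] + alpha * td_target

print(Q[state, action])

With alpha = 0.7, each update retains only 30% of the previous Q-value estimate and replaces the other 70% with the newly observed target, so the agent reacts quickly to new experience but is more sensitive to noise.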