Artificial Vision and Language Processing for Robotics

By: Álvaro Morena Alberola, Gonzalo Molina Gallego, Unai Garay Maestre

Overview of this book

Artificial Vision and Language Processing for Robotics begins by discussing the theory behind robots. You'll compare different methods used to work with robots and explore computer vision, its algorithms, and limits. You'll then learn how to control the robot with natural language processing commands. You'll study Word2Vec and GloVe embedding techniques, non-numeric data, recurrent neural networks (RNNs), and their advanced models. You'll create a simple Word2Vec model with Keras, as well as build a convolutional neural network (CNN) and improve it with data augmentation and transfer learning. You'll study the Robot Operating System (ROS) and build a conversational agent to manage your robot. You'll also integrate your agent with ROS and convert an image to text and text to speech. You'll learn to build an object recognition system using a video. By the end of this book, you'll have the skills you need to build a functional application that can integrate with ROS to extract useful information about your environment.

State-of-the-Art Models - Transfer Learning


Humans do not learn every task they want to achieve from scratch; they usually build on previous knowledge in order to learn new tasks much faster.

When training neural networks, some tasks are extremely expensive for every individual to train from scratch: distinguishing between two or more similar objects, for example, may require hundreds of thousands of training images and days of computation before the network performs well. Once a network has been trained on such an expensive task, its knowledge is saved in its weights, so other models can reuse those weights and be retrained for similar tasks.

Transfer learning does just that – it transfers the knowledge of a pretrained model to your model, so you can take advantage of that knowledge.
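
As a minimal sketch of this idea in Keras (assuming a VGG16 base pretrained on ImageNet and an illustrative five-class problem; neither is taken from this chapter's own code), transfer learning can look like this:

from keras.applications import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

# Load a convolutional base whose weights were already learned on ImageNet.
# include_top=False discards the original classifier so we can add our own.
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the pretrained layers: their knowledge is kept, not retrained.
for layer in base_model.layers:
    layer.trainable = False

# Add a small new head that learns to recognize our own five objects.
x = Flatten()(base_model.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(5, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=5) would now train only the new head.

Because only the small new head is trainable, such a network can reach good performance with far less data and time than training the whole CNN from scratch.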

So, for example, if you want to make a classifier that is capable of identifying five objects but that task seems...