
Artificial Vision and Language Processing for Robotics

By: Álvaro Morena Alberola, Gonzalo Molina Gallego, Unai Garay Maestre

Overview of this book

Artificial Vision and Language Processing for Robotics begins by discussing the theory behind robots. You'll compare different methods used to work with robots and explore computer vision, its algorithms, and its limits. You'll then learn how to control a robot with natural language processing commands. You'll study Word2Vec and GloVe embedding techniques, how to handle non-numeric data, and recurrent neural networks (RNNs) and their advanced models. You'll create a simple Word2Vec model with Keras, as well as build a convolutional neural network (CNN) and improve it with data augmentation and transfer learning. You'll study the Robot Operating System (ROS) and build a conversational agent to manage your robot. You'll also integrate your agent with ROS and convert images to text and text to speech. You'll learn to build an object recognition system using video. By the end of this book, you'll have the skills you need to build a functional application that can integrate with ROS to extract useful information about your environment.
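
As a taste of the embedding work covered in the book, the following is a minimal skip-gram Word2Vec sketch in Keras. The toy corpus, window size, and embedding dimension are illustrative assumptions, not the book's exact code.

# Minimal skip-gram Word2Vec sketch (corpus and hyperparameters are assumptions).
import numpy as np
from tensorflow.keras.layers import Activation, Dot, Embedding, Flatten, Input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.sequence import skipgrams
from tensorflow.keras.preprocessing.text import Tokenizer

corpus = ["the robot moves forward", "the robot turns left", "the robot stops now"]

# Build a vocabulary and integer-encode the sentences.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)
vocab_size = len(tokenizer.word_index) + 1
sequences = tokenizer.texts_to_sequences(corpus)

# Generate (target, context) pairs plus negative samples for each sentence.
pairs, labels = [], []
for seq in sequences:
    p, l = skipgrams(seq, vocabulary_size=vocab_size, window_size=2)
    pairs.extend(p)
    labels.extend(l)
targets = np.array([p[0] for p in pairs]).reshape(-1, 1)
contexts = np.array([p[1] for p in pairs]).reshape(-1, 1)
labels = np.array(labels).reshape(-1, 1)

embedding_dim = 16

# Two embedding lookups; their dot product predicts whether the pair co-occurs.
target_in = Input(shape=(1,))
context_in = Input(shape=(1,))
target_emb = Embedding(vocab_size, embedding_dim, name="target_embedding")(target_in)
context_emb = Embedding(vocab_size, embedding_dim)(context_in)
similarity = Flatten()(Dot(axes=-1)([target_emb, context_emb]))
output = Activation("sigmoid")(similarity)

model = Model([target_in, context_in], output)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit([targets, contexts], labels, epochs=5, verbose=0)

# The learned word vectors are the weights of the target embedding layer.
word_vectors = model.get_layer("target_embedding").get_weights()[0]

Each row of word_vectors is the learned embedding for one vocabulary index; the GloVe and RNN-based models covered later in the book build on the same idea of representing words as dense vectors.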

Summary


We have now achieved the objective of this book and built an end-to-end application for a robot. This was only one example application; you can use the techniques you learned throughout this book to build other robotics applications. In this chapter, you also learned how to install and work with Darknet and YOLO. You evaluated objects using AI and integrated YOLO with ROS so that your virtual robot can predict the objects in its environment.
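
To recall the shape of that integration, here is a minimal sketch of a ROS node that feeds camera frames to a YOLO detector through OpenCV's DNN module. The topic name, configuration and weight files, and confidence threshold are assumptions for illustration, not the book's exact code (the book works with Darknet directly).

#!/usr/bin/env python
# Illustrative ROS node: run YOLO on incoming camera frames and log detections.
# File names, the camera topic, and the threshold are assumptions, not the book's setup.
import cv2
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

# Load a pre-trained YOLO network through OpenCV's DNN module.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
output_layers = net.getUnconnectedOutLayersNames()
bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS image message into an OpenCV BGR frame.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(output_layers)

    # Each detection row holds box coordinates, objectness, and per-class scores.
    for output in outputs:
        for detection in output:
            scores = detection[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > 0.5:
                rospy.loginfo("Detected class %d with confidence %.2f", class_id, confidence)

if __name__ == "__main__":
    rospy.init_node("yolo_detector")
    rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback)
    rospy.spin()

A node like this logs or publishes what the robot sees, which is the kind of environmental information the final application extracts.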

You have learned how to control a robot with natural language processing commands and studied techniques such as Word2Vec and GloVe embeddings, along with how to handle non-numeric data. After this, you worked with ROS and built a conversational agent to manage your virtual robot. You developed the skills needed to build a functional application that can integrate with ROS to extract useful information about your environment. You worked with tools that are not only useful for robotics; you can use artificial vision...