Hands-On Vision and Behavior for Self-Driving Cars

By: Luca Venturi, Krishtof Korda

Overview of this book

The visual perception capabilities of a self-driving car are powered by computer vision. The work on self-driving cars can be broadly classified into three components: robotics, computer vision, and machine learning. This book gives computer vision engineers and developers the opportunity to enter this booming field. You will learn about computer vision, deep learning, and depth perception applied to driverless cars. Because making a real self-driving car is a huge cross-functional effort, the book provides a structured and thorough introduction. As you progress, you will work through relevant cases with working code, before going on to use OpenCV, TensorFlow, and Keras to analyze video streamed from car cameras. Later, you will learn how to interpret and make the most of lidar (light detection and ranging) to identify obstacles and localize your position. You'll also tackle core challenges in self-driving cars, such as finding lanes, detecting pedestrians and traffic lights, performing semantic segmentation, and writing a PID controller. By the end of this book, you'll be equipped with the skills you need to write code for a self-driving car running in a driverless car simulator, and be able to tackle the various challenges faced by autonomous car engineers.
Table of Contents (17 chapters)

Section 1: OpenCV and Sensors and Signals
Section 2: Improving How the Self-Driving Car Works with Deep Learning and Neural Networks
Section 3: Mapping and Controls

SLAM with an Ouster lidar and Google Cartographer

This is the moment you have been waiting for: building maps with hands-on experience using Cartographer and an Ouster lidar sensor!

An Ouster lidar was chosen for this hands-on example because it has a built-in IMU, which Cartographer needs to perform SLAM. This means that you don't need to purchase a separate sensor to provide the inertial data.

The example you will see is the offline processing of data collected from an Ouster sensor and is adapted from the work of Wil Selby. Please visit Wil Selby's website home page for more cool projects and ideas: https://www.wilselby.com/.

Selby also has a related project that performs SLAM online (in real time) for a DIY driverless car in ROS: https://github.com/wilselby/diy_driverless_car_ROS.
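To give a feel for what configuring Cartographer for an Ouster sensor involves, here is a minimal sketch of a Cartographer Lua configuration for 3D SLAM from a lidar point cloud plus IMU. The frame names (`os1_imu`, `os1_lidar`) and the tuning values are illustrative assumptions, not taken from the book or from Selby's project; your actual frame IDs and parameters will depend on your driver and setup:

```lua
-- Minimal 3D SLAM configuration sketch for Cartographer.
-- Frame names and tuning values below are assumptions for illustration.
include "map_builder.lua"
include "trajectory_builder.lua"

options = {
  map_builder = MAP_BUILDER,
  trajectory_builder = TRAJECTORY_BUILDER,
  map_frame = "map",
  tracking_frame = "os1_imu",    -- assumed IMU frame of the Ouster driver
  published_frame = "os1_lidar", -- assumed lidar frame
  odom_frame = "odom",
  provide_odom_frame = true,
  publish_frame_projected_to_2d = false,
  use_odometry = false,
  use_nav_sat = false,
  use_landmarks = false,
  num_laser_scans = 0,
  num_multi_echo_laser_scans = 0,
  num_subdivisions_per_laser_scan = 1,
  num_point_clouds = 1,          -- one 3D point cloud topic from the lidar
  lookup_transform_timeout_sec = 0.2,
  submap_publish_period_sec = 0.3,
  pose_publish_period_sec = 5e-3,
  trajectory_publish_period_sec = 30e-3,
  rangefinder_sampling_ratio = 1.0,
  odometry_sampling_ratio = 1.0,
  fixed_frame_pose_sampling_ratio = 1.0,
  imu_sampling_ratio = 1.0,
  landmarks_sampling_ratio = 1.0,
}

-- Use the 3D trajectory builder, since the Ouster produces 3D point clouds.
MAP_BUILDER.use_trajectory_builder_3d = true
TRAJECTORY_BUILDER_3D.num_accumulated_range_data = 1

return options
```

With a configuration like this, offline processing typically means replaying a recorded bag of lidar and IMU messages through Cartographer's offline node rather than running against live sensor data; the exact launch files and topic remappings are specific to each project, and Selby's repository shows one complete working setup.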

Ouster sensor

You can learn more about the Ouster data format and usage of the sensor from the OS1 user guide:

https://github.com/PacktPublishing/Hands-On-Vision-and-Behavior-for-Self...