Hands-On Vision and Behavior for Self-Driving Cars

By: Luca Venturi, Krishtof Korda

Overview of this book

The visual perception capabilities of a self-driving car are powered by computer vision. The work on self-driving cars can be broadly classified into three components: robotics, computer vision, and machine learning. This book gives computer vision engineers and developers the opportunity to move into this booming field. You will learn about computer vision, deep learning, and depth perception as applied to driverless cars. Because building a real self-driving car is a huge cross-functional effort, the book provides a structured and thorough introduction. As you progress, you will work through relevant cases with working code, and you will learn how to use OpenCV, TensorFlow, and Keras to analyze video streamed from car cameras. Later, you will learn how to interpret and make the most of lidar (light detection and ranging) sensors to identify obstacles and localize your position. You'll also tackle core challenges in self-driving cars such as finding lanes, detecting pedestrians and traffic lights, performing semantic segmentation, and writing a PID controller. By the end of this book, you'll be equipped to write code for a self-driving car running in a driverless car simulator, and to tackle the various challenges faced by autonomous car engineers.
Table of Contents (17 chapters)

Section 1: OpenCV and Sensors and Signals
Section 2: Improving How the Self-Driving Car Works with Deep Learning and Neural Networks
Section 3: Mapping and Controls

Summary

Wow, you have come a long way in this chapter and book. You began with nothing but a mobile phone and a blue GPS dot. You traveled across the globe to Russia and found the life-juice at the Monkey Face. You grabbed some snacks by SLAMming your way through your Cimmerian dark home. You learned the difference between maps and localization, and the various types of each. You picked up some open source tools and lashed them to your adventure belt for future use.

You also learned how to apply the open source Cartographer to Ouster OS1-128 lidar sensor data, coupled with the sensor's built-in IMU, to generate dense and tangible maps of some really nice townhomes, which you then manipulated using CloudCompare. Now you know how to create maps, and you can go out and map your own spaces and localize within them! The world is your Ouster (pardon me, oyster)! We can't wait to see what you build next with your creativity and know-how!

We really hope that you enjoyed learning with us; we certainly enjoyed...