Learn OpenCV 4 By Building Projects - Second Edition

By: David Millán Escrivá, Vinícius G. Mendonça, Prateek Joshi

Overview of this book

OpenCV is one of the best open source libraries available, and can help you focus on constructing complete projects on image processing, motion detection, and image segmentation. Whether you're completely new to computer vision, or have a basic understanding of its concepts, Learn OpenCV 4 by Building Projects - Second Edition will be your guide to understanding OpenCV concepts and algorithms through real-world examples and projects. You'll begin with the installation of OpenCV and the basics of image processing. Then, you'll cover user interfaces and get deeper into image processing. As you progress through the book, you'll learn complex computer vision algorithms and explore machine learning and face detection. The book then guides you in creating optical flow video analysis and background subtraction in complex scenes. In the concluding chapters, you'll also learn about text segmentation and recognition and understand the basics of the new and improved deep learning module. By the end of this book, you'll be familiar with the basics of OpenCV, such as matrix operations, filters, and histograms, and you'll have mastered commonly used computer vision techniques to build OpenCV projects from scratch.

Feature-based tracking

Feature-based tracking refers to tracking individual feature points across successive frames in a video. The advantage is that we don't have to detect feature points in every single frame; we detect them once and then keep tracking them, which is more efficient than running the detector on every frame. We use a technique called optical flow to track these features. Optical flow is one of the most popular techniques in computer vision. We choose a bunch of feature points and track them through the video stream. Once the feature points are detected, we compute the displacement vectors between consecutive frames and use them to show the motion of those keypoints. These vectors are called motion vectors. A motion vector for a particular point is basically just a directional line indicating where that point has moved compared to the previous frame...
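The following is a minimal sketch of this idea using OpenCV's pyramidal Lucas-Kanade optical flow: cv::goodFeaturesToTrack picks the initial feature points and cv::calcOpticalFlowPyrLK tracks them from frame to frame, with the motion vectors drawn as lines. The camera index, point counts, and detector parameters are illustrative assumptions, not the book's exact code:

```cpp
// Sketch: detect feature points once, track them with pyramidal
// Lucas-Kanade optical flow, and draw the resulting motion vectors.
// Parameter values here are illustrative only.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);   // default camera; could also be a video file
    if (!cap.isOpened()) return -1;

    cv::Mat prevFrame, prevGray;
    cap >> prevFrame;
    cv::cvtColor(prevFrame, prevGray, cv::COLOR_BGR2GRAY);

    // Detect a bunch of feature points once (Shi-Tomasi corners)
    std::vector<cv::Point2f> prevPoints;
    cv::goodFeaturesToTrack(prevGray, prevPoints, 200, 0.01, 10);

    cv::Mat frame, gray;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Re-detect only if too many points have been lost
        if (prevPoints.size() < 10)
            cv::goodFeaturesToTrack(prevGray, prevPoints, 200, 0.01, 10);

        // Track the points from the previous frame into the current one
        std::vector<cv::Point2f> nextPoints;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPoints, nextPoints,
                                 status, err);

        // Keep successfully tracked points and draw their motion vectors
        std::vector<cv::Point2f> trackedPoints;
        for (size_t i = 0; i < nextPoints.size(); i++)
        {
            if (!status[i]) continue;
            cv::line(frame, prevPoints[i], nextPoints[i],
                     cv::Scalar(0, 255, 0), 2);
            cv::circle(frame, nextPoints[i], 3, cv::Scalar(0, 0, 255), -1);
            trackedPoints.push_back(nextPoints[i]);
        }

        cv::imshow("Feature-based tracking", frame);
        if (cv::waitKey(30) == 27) break;   // Esc to quit

        // Current frame and points become the reference for the next iteration
        prevGray = gray.clone();
        prevPoints = trackedPoints;
    }
    return 0;
}
```

Note that the detector runs only when the tracked set shrinks below a threshold; in between, each frame only pays the cost of the optical flow step, which is what makes this approach cheaper than detecting features in every frame.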