OpenCV 3.x with Python By Example - Second Edition

By: Gabriel Garrido Calvo, Prateek Joshi

Overview of this book

Computer vision is found everywhere in modern technology. OpenCV for Python enables us to run computer vision algorithms in real time. With the advent of powerful machines, we have more processing power to work with. Using this technology, we can seamlessly integrate our computer vision applications into the cloud. Focusing on OpenCV 3.x and Python 3.6, this book will walk you through all the building blocks needed to build amazing computer vision applications with ease. We start off by manipulating images using simple filtering and geometric transformations. We then discuss affine and projective transformations and see how we can use them to apply cool advanced manipulations to our photos, such as resizing them while keeping the content intact or smoothly removing undesired elements. We then cover object tracking, body part recognition, and object recognition using advanced machine learning techniques such as artificial neural networks. 3D reconstruction and augmented reality techniques are also included. The book covers popular OpenCV libraries with the help of examples, and it is a practical tutorial covering examples at different levels, teaching you about the different functions of OpenCV and their actual implementation. By the end of this book, you will have acquired the skills to use OpenCV and Python to develop real-world computer vision applications.

Feature-based tracking


Feature-based tracking refers to tracking individual feature points across successive frames of a video. We use a technique called optical flow to track these features. Optical flow is one of the most popular techniques in computer vision. We choose a bunch of feature points and track them through the video stream.

Once we detect the feature points, we compute the displacement vectors and show the motion of those keypoints between consecutive frames. These vectors are called motion vectors. There are many ways to do this, but the Lucas-Kanade method is perhaps the most popular of these techniques. You can learn more in the official OpenCV documentation at https://docs.opencv.org/3.2.0/d7/d8b/tutorial_py_lucas_kanade.html.
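
The following is a minimal two-frame sketch of this idea using OpenCV's pyramidal Lucas-Kanade implementation, cv2.calcOpticalFlowPyrLK; the video path and the corner-detection parameters are placeholder assumptions rather than values taken from this chapter:

import cv2

# Open a video source; 'input.mp4' is only a placeholder path
cap = cv2.VideoCapture('input.mp4')

# Grab the first frame and pick the corners to track (parameter values are illustrative)
ret, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)

# Grab the next frame and estimate where each point moved to
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)

# Draw the motion vectors: a line from each old position to its new one
for new, old in zip(next_pts[status == 1], prev_pts[status == 1]):
    x1, y1 = new.ravel()
    x0, y0 = old.ravel()
    cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 2)
    cv2.circle(frame, (int(x1), int(y1)), 3, (0, 0, 255), -1)

cv2.imshow('Motion vectors', frame)
cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()

The status array returned by cv2.calcOpticalFlowPyrLK marks which points were tracked successfully, so only those points are drawn.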

We start the process by extracting the feature points. For each feature point, we create a 3x3 patch with the feature point at its center. The assumption here is that all the points within each patch will have a similar motion. We can adjust the size of...
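
Below is a small sketch of how these pieces are usually set up in OpenCV, grouping the detector and tracker settings into parameter dictionaries; the synthetic test frame and all numeric values are assumptions chosen for illustration, with winSize playing the role of the patch tracked around each feature point:

import cv2
import numpy as np

# Synthetic grayscale frame containing a white square, just so the snippet runs standalone
frame = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(frame, (50, 50), (150, 150), 255, -1)

# Parameters for selecting the feature points (values chosen for illustration)
feature_params = dict(maxCorners=100,    # keep at most 100 points
                      qualityLevel=0.3,  # discard weak corners
                      minDistance=7,     # minimum spacing between accepted points
                      blockSize=7)       # neighborhood used to score each corner

# Parameters for the Lucas-Kanade step; winSize is the window tracked around each
# feature point and can be enlarged to cope with larger motions between frames
lk_params = dict(winSize=(15, 15),
                 maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

pts = cv2.goodFeaturesToTrack(frame, mask=None, **feature_params)
print('Selected %d feature points' % (0 if pts is None else len(pts)))
# pts would then be passed to cv2.calcOpticalFlowPyrLK(..., **lk_params), as in the
# earlier sketch, to obtain the motion vectors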