OpenCV 3.x with Python By Example - Second Edition

By: Gabriel Garrido Calvo, Prateek Joshi

Overview of this book

Computer vision is found everywhere in modern technology. OpenCV for Python enables us to run computer vision algorithms in real time. With the advent of powerful machines, we have more processing power to work with. Using this technology, we can seamlessly integrate our computer vision applications into the cloud. Focusing on OpenCV 3.x and Python 3.6, this book will walk you through all the building blocks needed to build amazing computer vision applications with ease. We start off by manipulating images using simple filtering and geometric transformations. We then discuss affine and projective transformations and see how we can use them to apply advanced manipulations to photos, such as resizing them while keeping the content intact or smoothly removing undesired elements. We then cover object tracking, body part recognition, and object recognition using machine learning techniques such as artificial neural networks. 3D reconstruction and augmented reality techniques are also included. The book covers popular OpenCV libraries with the help of examples. It is a practical tutorial that works through examples at different levels, teaching you about the different functions of OpenCV and their actual implementation. By the end of this book, you will have acquired the skills to use OpenCV and Python to develop real-world computer vision applications.

How to track planar objects


Now that you understand what pose estimation is, let's see how you can use it to track planar objects. Let's consider the following planar object:

Now, if we extract feature points from this image, we will see something like this:
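For reference, here is a minimal sketch of how such feature points can be extracted with OpenCV. The filename 'box.jpg' is a placeholder for the planar object image, and ORB is used as the detector (any keypoint detector available in OpenCV 3.x would behave similarly):

```python
import cv2

# Load the image of the planar object ('box.jpg' is a placeholder filename)
img = cv2.imread('box.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect keypoints and compute descriptors with ORB
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(gray, None)

# Draw the detected feature points on top of the original image
output = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imshow('Feature points', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
```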

Let's tilt the cardboard box:

As we can see, the cardboard box is tilted in this image. Now, if we want to make sure our virtual object is overlaid on top of this surface, we need to recover this planar tilt information. One way to do this is by using the relative positions of the feature points. If we extract the feature points from the preceding image, they will look like this:

As you can see, the feature points are closer together horizontally at the far end of the plane than at the near end:
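One way to turn this observation into numbers is to match the feature points of the frontal view against those of the tilted view and estimate the homography relating the two. The following is a minimal sketch under the assumption that the two views are stored as 'box_frontal.jpg' and 'box_tilted.jpg' (placeholder filenames); it uses ORB features, a brute-force matcher, and cv2.findHomography with RANSAC:

```python
import cv2
import numpy as np

# Load the frontal and tilted views (placeholder filenames)
img_ref = cv2.imread('box_frontal.jpg', cv2.IMREAD_GRAYSCALE)
img_tilted = cv2.imread('box_tilted.jpg', cv2.IMREAD_GRAYSCALE)

# Detect and describe feature points in both views
orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(img_ref, None)
kp_tilted, des_tilted = orb.detectAndCompute(img_tilted, None)

# Match descriptors with a brute-force Hamming matcher and keep the best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_tilted), key=lambda m: m.distance)[:50]

# Collect the matched point coordinates from both images
src_pts = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([kp_tilted[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Estimate the homography that encodes the planar tilt (RANSAC rejects outliers)
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
print('Estimated homography:\n', H)
```

The resulting 3 x 3 matrix describes how points on the plane move between the two views; once we have it, a virtual overlay can be warped with cv2.warpPerspective so that it appears attached to the surface.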

So, we can utilize this information to extract the orientation of the plane from the image. If you remember, we discussed perspective transformations in detail when we covered geometric transformations, as well as panoramic imaging. All...