OpenCV 3.x with Python By Example - Second Edition

By: Gabriel Garrido Calvo, Prateek Joshi

Overview of this book

Computer vision is found everywhere in modern technology. OpenCV for Python enables us to run computer vision algorithms in real time, and with the advent of powerful machines, we have more processing power to work with. Using this technology, we can seamlessly integrate our computer vision applications into the cloud. Focusing on OpenCV 3.x and Python 3.6, this book will walk you through all the building blocks needed to build amazing computer vision applications with ease. We start off by manipulating images using simple filtering and geometric transformations. We then discuss affine and projective transformations and see how we can use them to apply advanced manipulations to photos, such as resizing them while keeping the content intact or smoothly removing undesired elements. We then cover object tracking, body part recognition, and object recognition using advanced machine learning techniques such as artificial neural networks. 3D reconstruction and augmented reality techniques are also included. The book covers the popular OpenCV library with the help of examples: it is a practical tutorial that works through examples at different levels, teaching you about the different functions of OpenCV and their actual implementation. By the end of this book, you will have acquired the skills to use OpenCV and Python to develop real-world computer vision applications.

What is a visual dictionary?


We will be using the Bag of Words model to build our object recognizer. Each image is represented as a histogram of visual words; these visual words are the N centroids obtained from all the keypoints extracted from the training images. The pipeline is shown in the following diagram:

From each training image, we detect a set of keypoints and extract a feature descriptor for each of them. Every image gives rise to a different number of keypoints, but in order to train a classifier, each image must be represented by a fixed-length feature vector. This feature vector is simply a histogram in which each bin corresponds to a visual word.
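As a rough sketch of this step, the snippet below detects keypoints and computes a descriptor for each one in a single image. ORB is used here as a stand-in extractor because it ships with core OpenCV 3.x (SIFT and SURF live in the opencv-contrib package); the helper name and parameters are illustrative, not the book's exact code:

```python
import cv2

# Detect keypoints in one image and compute a descriptor for each keypoint.
# ORB is a stand-in here; SIFT/SURF from opencv-contrib work the same way.
def extract_features(image_path, num_keypoints=150):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    detector = cv2.ORB_create(nfeatures=num_keypoints)
    keypoints, descriptors = detector.detectAndCompute(gray, None)
    # The number of detected keypoints varies from image to image, so
    # descriptors has a variable number of rows -- this is exactly why we
    # need the fixed-length histogram representation described above.
    return keypoints, descriptors
```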

Once we have extracted all the features from all the keypoints in the training images, we perform K-means clustering to obtain N centroids. This N is the length of the feature vector for each image: every image is now represented as a histogram in which each bin corresponds to one of the N centroids. For simplicity, let's say...
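A minimal sketch of these two steps might look like the following. The function names and the num_words value are illustrative assumptions, and note that clustering binary ORB descriptors with Euclidean K-means is a simplification (float descriptors such as SIFT fit K-means more naturally):

```python
import numpy as np
import cv2

# Cluster the descriptors from all training images into N visual words.
def build_vocabulary(descriptor_list, num_words=32):
    data = np.vstack(descriptor_list).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.1)
    # cv2.kmeans returns (compactness, labels, centers); we keep the centers.
    _, _, centroids = cv2.kmeans(data, num_words, None, criteria, 10,
                                 cv2.KMEANS_RANDOM_CENTERS)
    return centroids

# Represent one image as a normalized histogram over the N visual words:
# each descriptor votes for its nearest centroid.
def compute_histogram(descriptors, centroids):
    histogram = np.zeros(len(centroids), dtype=np.float32)
    for d in descriptors.astype(np.float32):
        nearest = np.argmin(np.linalg.norm(centroids - d, axis=1))
        histogram[nearest] += 1
    return histogram / max(histogram.sum(), 1)  # normalize to unit sum
```

The resulting fixed-length histograms can then be fed to any standard classifier, regardless of how many keypoints each original image produced.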