
Learning OpenCV 3 Application Development

By: Samyak Datta

Overview of this book

Computer vision and machine learning concepts underpin most practical vision-based projects. If you're a novice, this book provides the steps to build and deploy an end-to-end application in the domain of computer vision using OpenCV/C++. At the outset, we explain how to install OpenCV and demonstrate how to run some simple programs. You will start with images (the building blocks of image processing applications), and see how they are stored and processed by OpenCV. You'll get comfortable with OpenCV-specific jargon (Mat, Point, Scalar, and more), and learn how to traverse images and perform basic pixel-wise operations. Building upon this, we introduce slightly more advanced image processing concepts such as filtering, thresholding, and edge detection. In the latter parts, the book touches upon more complex and ubiquitous concepts such as face detection (using Haar cascade classifiers), interest point detection algorithms, and feature descriptors. You will begin to appreciate the true power of the library in how it reduces mathematically non-trivial algorithms to a single line of code! The concluding sections cover OpenCV's Machine Learning module. You will see not only how OpenCV helps you pre-process and extract features from images that are relevant to the problems you are trying to solve, but also how to use Machine Learning algorithms that work on these features to make intelligent predictions from visual data!
Table of Contents (16 chapters)
Learning OpenCV 3 Application Development
Credits
About the Author
About the Reviewer
www.PacktPub.com
Preface

AdaBoost learning


In the Haar features section, we stated that there are 180,000 rectangle (Haar) features associated with each image sub-window. Even though each feature can be computed very efficiently with the help of an integral image, using this complete set is prohibitively expensive. In order to circumvent this predicament, a neat trick was applied. It is reasonable to expect that not all of the 180,000 theoretically possible features within each sub-window are equally important. What if we could sample this huge set of features, and for each sub-window, select a reasonably small subset that can help us with our classification task? If we did that, we could combine a very small number of these features to form an effective classifier.

This is the main idea behind the technique known as AdaBoost learning. In fact, AdaBoost goes a step further and learns multiple such classifiers, each of which learns on a subset of the features. These classifiers are...