OpenCV 3.x with Python By Example - Second Edition

By: Gabriel Garrido Calvo, Prateek Joshi

Overview of this book

Computer vision is found everywhere in modern technology. OpenCV for Python enables us to run computer vision algorithms in real time. With the advent of powerful machines, we have more processing power to work with. Using this technology, we can seamlessly integrate our computer vision applications into the cloud. Focusing on OpenCV 3.x and Python 3.6, this book will walk you through all the building blocks needed to build amazing computer vision applications with ease. We start off by manipulating images using simple filtering and geometric transformations. We then discuss affine and projective transformations and see how we can use them to apply cool advanced manipulations to our photos, such as resizing them while keeping the content intact or smoothly removing undesired elements. We will then cover techniques of object tracking, body part recognition, and object recognition using advanced techniques of machine learning, such as artificial neural networks. 3D reconstruction and augmented reality techniques are also included. The book covers popular OpenCV libraries with the help of examples. This book is a practical tutorial that covers various examples at different levels, teaching you about the different functions of OpenCV and their actual implementation. By the end of this book, you will have acquired the skills to use OpenCV and Python to develop real-world computer vision applications.

Background subtraction


Background subtraction is very useful in video surveillance. The technique performs really well in cases where we have to detect moving objects in a static scene. As the name indicates, this algorithm works by detecting the background and subtracting it from the current frame to obtain the foreground, that is, the moving objects.

In order to detect moving objects, we first need to build a model of the background. This is not the same as frame differencing, because we are actually modeling the background and using that model to detect moving objects, so it performs much better than the simple frame differencing technique. The algorithm tries to detect the static parts of the scene and incorporate them into the background model, making it an adaptive technique that can adjust as the scene changes.
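As a minimal sketch of this idea, the following snippet uses OpenCV's built-in MOG2 background subtractor to build an adaptive background model from a live video stream. The webcam index and the parameter values are assumptions for illustration, and the book's own examples may use a different subtractor or settings:

import cv2

# Open the default camera (index 0 is an assumption; use a video file path instead if needed)
cap = cv2.VideoCapture(0)

# Create an adaptive background model; 'history' controls how many recent frames
# influence the model, and 'detectShadows' marks shadow pixels in gray rather than white
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=True)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Each new frame updates the background model and returns the foreground mask,
    # where moving objects appear as white pixels
    fg_mask = bg_subtractor.apply(frame)

    cv2.imshow('Input', frame)
    cv2.imshow('Foreground mask', fg_mask)

    # Press Esc to exit
    if cv2.waitKey(30) == 27:
        break

cap.release()
cv2.destroyAllWindows()

If you keep the camera pointed at a static scene, the foreground mask gradually goes dark as the model absorbs the scene into the background, and only newly moving objects show up as white regions.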

Let's consider the following image:

Now, as we gather more frames in this scene, every part of the image will gradually become a part of the background...