Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA

By: Bhaumik Vaidya

Overview of this book

Computer vision has been revolutionizing a wide range of industries, and OpenCV is the most widely chosen tool for computer vision, with its ability to work in multiple programming languages. Nowadays, computer vision applications need to process large images in real time, which is difficult for OpenCV to handle on its own. This is where CUDA comes into the picture, allowing OpenCV to leverage powerful NVIDIA GPUs. This book provides a detailed overview of integrating OpenCV with CUDA for practical applications. To start with, you'll understand GPU programming with CUDA, an essential aspect for computer vision developers who have never worked with GPUs. You'll then move on to exploring OpenCV acceleration with GPUs and CUDA by walking through some practical examples. Once you have got to grips with the core concepts, you'll familiarize yourself with deploying OpenCV applications on the NVIDIA Jetson TX1, which is popular for computer vision and deep learning applications. The last chapters of the book explain PyCUDA, a Python library that leverages the power of CUDA and GPUs for acceleration and can be used by computer vision developers who use OpenCV with Python. By the end of this book, you'll be able to enhance your computer vision applications through the book's hands-on approach.

Object tracking using background subtraction

Background subtraction is the process of separating foreground objects from the background in a sequence of video frames. It is widely used in object detection and tracking applications to remove the static background before further processing. Background subtraction is performed in four steps:

  1. Image preprocessing
  2. Modeling of background
  3. Detection of foreground
  4. Data validation

Image preprocessing is always performed to remove any noise present in the image. The second step is to model the background so that it can be separated from the foreground. In some applications, the first frame of the video is taken as the background and is never updated; the absolute difference between each subsequent frame and this first frame is then used to separate the foreground from the background.
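
This first-frame differencing approach can be expressed in a few lines with OpenCV's CUDA module. The following is only a minimal sketch, assuming OpenCV is built with CUDA support (the cudaarithm module); the input file name "input.mp4" and the threshold value of 30 are illustrative assumptions, not values from the book:

    #include <opencv2/opencv.hpp>
    #include <opencv2/cudaarithm.hpp>

    int main() {
        cv::VideoCapture cap("input.mp4");   // hypothetical input video
        if (!cap.isOpened()) return -1;

        cv::Mat frame, gray;
        cap >> frame;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // The first frame is uploaded once and used as a fixed background model.
        cv::cuda::GpuMat d_background(gray), d_frame, d_diff, d_foreground;

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            d_frame.upload(gray);

            // Absolute difference with the background, then a fixed threshold
            // to obtain a binary foreground mask.
            cv::cuda::absdiff(d_frame, d_background, d_diff);
            cv::cuda::threshold(d_diff, d_foreground, 30, 255, cv::THRESH_BINARY);

            cv::Mat foreground;
            d_foreground.download(foreground);
            cv::imshow("Foreground mask", foreground);
            if (cv::waitKey(1) == 'q') break;
        }
        return 0;
    }

Because all per-pixel work (difference and threshold) happens on GPU matrices, only the upload of each frame and the download of the mask cross the PCIe bus.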

In other techniques, the background is modeled by taking an average or median of all the frames...
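
One simple way to approximate the averaging idea is a running-average background model kept on the GPU and updated with each incoming frame. The sketch below uses cv::cuda::addWeighted for the update; the learning rate alpha, the threshold value, and the input file name are illustrative assumptions rather than values from the book:

    #include <opencv2/opencv.hpp>
    #include <opencv2/cudaarithm.hpp>

    int main() {
        cv::VideoCapture cap("input.mp4");   // hypothetical input video
        if (!cap.isOpened()) return -1;

        const double alpha = 0.02;   // illustrative contribution of each new frame

        cv::Mat frame, gray;
        cap >> frame;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        cv::cuda::GpuMat d_frame(gray), d_frame32f, d_background, d_bg8u, d_diff, d_mask;
        d_frame.convertTo(d_background, CV_32F);   // background model kept in float

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            d_frame.upload(gray);
            d_frame.convertTo(d_frame32f, CV_32F);

            // Running average: background = (1 - alpha) * background + alpha * frame
            cv::cuda::addWeighted(d_background, 1.0 - alpha, d_frame32f, alpha, 0.0, d_background);

            // Foreground = pixels that differ strongly from the averaged background.
            d_background.convertTo(d_bg8u, CV_8U);
            cv::cuda::absdiff(d_frame, d_bg8u, d_diff);
            cv::cuda::threshold(d_diff, d_mask, 30, 255, cv::THRESH_BINARY);

            cv::Mat mask;
            d_mask.download(mask);
            cv::imshow("Foreground mask", mask);
            if (cv::waitKey(1) == 'q') break;
        }
        return 0;
    }

Unlike the first-frame approach, this model slowly adapts to gradual changes such as lighting, since every new frame contributes a small fraction (alpha) to the background estimate.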