
Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA

By: Bhaumik Vaidya

Overview of this book

Computer vision has been revolutionizing a wide range of industries, and OpenCV is the most widely used computer vision library, with support for multiple programming languages. Modern computer vision applications often need to process large images in real time, which is difficult for OpenCV to handle on its own. This is where CUDA comes into the picture, allowing OpenCV to leverage powerful NVIDIA GPUs. This book provides a detailed overview of integrating OpenCV with CUDA for practical applications. To start with, you'll learn GPU programming with CUDA, an essential skill for computer vision developers who have never worked with GPUs. You'll then move on to exploring OpenCV acceleration with GPUs and CUDA by walking through practical examples. Once you have got to grips with the core concepts, you'll familiarize yourself with deploying OpenCV applications on the NVIDIA Jetson TX1, which is popular for computer vision and deep learning applications. The last chapters of the book explain PyCUDA, a Python library that leverages the power of CUDA and GPUs for acceleration and can be used by computer vision developers who use OpenCV with Python. By the end of this book, you'll be able to build and accelerate your own computer vision applications using this hands-on approach.

CUDA streams

We have seen that the GPU provides a great performance improvement through data parallelism, where a single instruction operates on multiple data items. We have not yet seen task parallelism, where multiple kernel functions that are independent of each other execute in parallel. For example, one function may be computing pixel values while another downloads something from the internet. The CPU provides a very flexible mechanism for this kind of task parallelism; the GPU also provides this capability, though it is not as flexible as the CPU's. On the GPU, task parallelism is achieved using CUDA streams, which we will look at in detail in this section.

A CUDA stream is nothing but a queue of GPU operations that execute in a specific order. These operations include kernel launches, memory copy operations, and CUDA event operations. The order in which they are...
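As a minimal sketch of the idea, the following CUDA program enqueues a copy-in, a kernel launch, and a copy-out on each of two streams; operations within one stream run in order, while the two streams may overlap on the GPU. The kernel name `addOne` and the data sizes are illustrative, not taken from the book:

```cuda
// Sketch: overlapping two independent copy/kernel/copy pipelines with CUDA streams.
#include <cuda_runtime.h>

__global__ void addOne(int *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1;
}

int main() {
    const int N = 1 << 20;
    const size_t bytes = N * sizeof(int);

    // Pinned host memory is required for cudaMemcpyAsync to actually
    // overlap with computation.
    int *h_a, *h_b;
    cudaMallocHost(&h_a, bytes);
    cudaMallocHost(&h_b, bytes);

    int *d_a, *d_b;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Each stream is an ordered queue: copy in -> kernel -> copy out.
    cudaMemcpyAsync(d_a, h_a, bytes, cudaMemcpyHostToDevice, s1);
    addOne<<<(N + 255) / 256, 256, 0, s1>>>(d_a, N);
    cudaMemcpyAsync(h_a, d_a, bytes, cudaMemcpyDeviceToHost, s1);

    cudaMemcpyAsync(d_b, h_b, bytes, cudaMemcpyHostToDevice, s2);
    addOne<<<(N + 255) / 256, 256, 0, s2>>>(d_b, N);
    cudaMemcpyAsync(h_b, d_b, bytes, cudaMemcpyDeviceToHost, s2);

    // Wait for both queues to drain before using the results on the host.
    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(d_a); cudaFree(d_b);
    cudaFreeHost(h_a); cudaFreeHost(h_b);
    return 0;
}
```

Note that the host thread returns from `cudaMemcpyAsync` and the kernel launch immediately; without the `cudaStreamSynchronize` calls, the host would read `h_a` and `h_b` before the GPU has finished writing them.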