Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA

By : Bhaumik Vaidya
Overview of this book

Computer vision has been revolutionizing a wide range of industries, and OpenCV is the most widely used computer vision tool, with the ability to work in multiple programming languages. Modern computer vision applications often need to process large images in real time, which is difficult for OpenCV to handle on its own. This is where CUDA comes into the picture, allowing OpenCV to leverage powerful NVIDIA GPUs. This book provides a detailed overview of integrating OpenCV with CUDA for practical applications. To start with, you'll understand GPU programming with CUDA, an essential aspect for computer vision developers who have never worked with GPUs. You'll then move on to exploring OpenCV acceleration with GPUs and CUDA by walking through some practical examples. Once you have got to grips with the core concepts, you'll familiarize yourself with deploying OpenCV applications on the NVIDIA Jetson TX1, which is popular for computer vision and deep learning applications. The last chapters of the book explain PyCUDA, a Python library that leverages the power of CUDA and GPUs for acceleration and can be used by computer vision developers who use OpenCV with Python. By the end of this book, you'll be able to enhance your computer vision applications with GPU acceleration, thanks to the book's hands-on approach.
Table of Contents (15 chapters)

Constant memory

The CUDA language makes another type of memory available to the programmer, known as constant memory. NVIDIA hardware provides 64 KB of this constant memory, which is used to store data that remains constant throughout the execution of a kernel. Constant memory is cached on-chip, so using it in place of global memory can speed up execution. It also reduces the memory bandwidth required from the device's global memory. In this section, we will see how to use constant memory in CUDA programs. As an example, we take a program that performs the simple math operation a*x + b, where a and b are constants. The kernel function code for this program is shown as follows:

#include <stdio.h>
#include <iostream>
#include <cuda.h>
#include <cuda_runtime.h>

//Defining two constants
__constant__...
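
The listing is truncated here. Based on the description above, a minimal self-contained sketch of the constant-memory version of a*x + b might look like the following. The names `constant_f`, `constant_g`, and `gpu_constant_memory` are illustrative assumptions, not necessarily the book's own identifiers:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

#define N 5

// Constants a and b live in constant memory: read-only inside kernels,
// cached on-chip. Names are assumptions for this sketch.
__constant__ int constant_f; // a
__constant__ int constant_g; // b

// Each thread computes a*x + b for one array element.
__global__ void gpu_constant_memory(float *d_in, float *d_out)
{
    int tid = threadIdx.x;
    d_out[tid] = constant_f * d_in[tid] + constant_g;
}

int main(void)
{
    float h_in[N], h_out[N];
    float *d_in, *d_out;
    int h_f = 2, h_g = 20;

    cudaMalloc(&d_in, N * sizeof(float));
    cudaMalloc(&d_out, N * sizeof(float));

    for (int i = 0; i < N; i++)
        h_in[i] = (float)i;

    cudaMemcpy(d_in, h_in, N * sizeof(float), cudaMemcpyHostToDevice);
    // Constant-memory symbols are initialized with cudaMemcpyToSymbol,
    // not with an ordinary cudaMemcpy.
    cudaMemcpyToSymbol(constant_f, &h_f, sizeof(int));
    cudaMemcpyToSymbol(constant_g, &h_g, sizeof(int));

    gpu_constant_memory<<<1, N>>>(d_in, d_out);
    cudaDeviceSynchronize();

    cudaMemcpy(h_out, d_out, N * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < N; i++)
        printf("2 * %.1f + 20 = %.1f\n", h_in[i], h_out[i]);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Note that because all threads in a warp read the same `constant_f` and `constant_g` values, the constant cache can broadcast them, which is exactly the access pattern where constant memory outperforms global memory.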