Hands-On GPU Programming with Python and CUDA

By Dr. Brian Tuomanen

Overview of this book

Hands-On GPU Programming with Python and CUDA hits the ground running: you’ll start by learning how to apply Amdahl’s Law, use a code profiler to identify bottlenecks in your Python code, and set up an appropriate GPU programming environment. You’ll then see how to “query” the GPU’s features and copy arrays of data to and from the GPU’s own memory. As you make your way through the book, you’ll launch code directly onto the GPU and write full-blown GPU kernels and device functions in CUDA C. You’ll get to grips with profiling GPU code effectively and fully test and debug your code using the Nsight IDE. Next, you’ll explore some of the more well-known NVIDIA libraries, such as cuFFT and cuBLAS. With a solid background in place, you will now apply your new-found knowledge to develop your very own GPU-based deep neural network from scratch. You’ll then explore advanced topics, such as warp shuffling, dynamic parallelism, and PTX assembly. In the final chapter, you’ll see some topics and applications related to GPU programming that you may wish to pursue, including AI, graphics, and blockchain. By the end of this book, you will be able to apply GPU programming to problems related to data science and high-performance computing.

Profiling your code

We saw in the previous example that we can individually time different functions and components with the standard time function in Python. While this approach works fine for our small example program, it won't always be feasible for larger programs that call many different functions, some of which may or may not be worth our effort to parallelize, or even optimize on the CPU. Our goal here is to find the bottlenecks and hotspots of a program. Even if we were feeling energetic and used time around every function call we make, we might miss something, or there might be some system or library calls that we don't even consider that happen to be slowing things down. We should find candidate portions of the code to offload onto the GPU before we even think about rewriting the code to run on the GPU; we must always follow the wise words of the famous American computer scientist Donald Knuth: premature optimization is the root of all evil.
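As a quick reminder, the manual approach from the previous example amounts to wrapping each call of interest in a pair of time.time() calls; here is a minimal sketch, where work is a hypothetical stand-in for any function we might want to time:

    import time

    def work():
        # Hypothetical stand-in for a function we might want to time.
        return sum(i * i for i in range(1000000))

    t1 = time.time()
    work()
    t2 = time.time()
    print('work took %f seconds.' % (t2 - t1))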

We use what is known as a profiler to find these hotspots and bottlenecks in our code. A profiler conveniently shows us where our program is spending the most time, allowing us to optimize accordingly.

Using the cProfile module

We will primarily be using the cProfile module to check our code. This module is part of the standard library and is included in every modern Python installation. We can run the profiler from the command line with -m cProfile, specify that we want to organize the results by the cumulative time spent in each function with -s cumtime, and then redirect the output into a text file with the > operator.

This will work in both Linux Bash and Windows PowerShell.

Let's try this now:
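Assuming the Mandelbrot program from the previous section is saved as mandelbrot.py (substitute the actual names of your script and output file), the full command looks something like this:

    python -m cProfile -s cumtime mandelbrot.py > mandelbrot_profile.txt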

We can now look at the contents of the text file with our favorite text editor. Let's keep in mind that the output of the program will be included at the beginning of the file:
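The exact contents will depend on your machine and on the program's own print statements, but cProfile's portion of the file begins with a summary line followed by a table with these columns (the values below are placeholders, not real measurements):

             XXXXX function calls in X.XXX seconds

       Ordered by: cumulative time

       ncalls  tottime  percall  cumtime  percall filename:lineno(function)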

Now, since we didn't remove the references to time in the original example, we see their output in the first two lines of the file. We can then see the total number of function calls made in this program and the total time it took to run.

Subsequently, we have a list of the functions called in the program, ordered from the cumulatively most time-consuming to the least; the first line is the program itself, while the second line is, as expected, the simple_mandelbrot function from our program. (Notice that the time here aligns with what we measured with the time function.) After this, we can see many library and system calls related to dumping the Mandelbrot graph to a file, all of which take comparatively little time. We use such output from cProfile to infer where the bottlenecks are within a given program.
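As an aside, if you'd rather profile from within Python than from the command line, the standard cProfile and pstats modules expose the same statistics programmatically. Here is a minimal sketch, where work is again a hypothetical stand-in for the code we actually want to profile:

    import cProfile
    import pstats

    def work():
        # Hypothetical stand-in for the code we want to profile,
        # such as a call to simple_mandelbrot with suitable arguments.
        return sum(i * i for i in range(1000000))

    profiler = cProfile.Profile()
    profiler.enable()
    work()
    profiler.disable()

    # Sort the collected entries by cumulative time, mirroring -s cumtime.
    stats = pstats.Stats(profiler)
    stats.sort_stats('cumtime')
    stats.print_stats(10)  # print only the ten most expensive entries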