Hands-On GPU Programming with Python and CUDA

By Dr. Brian Tuomanen

Overview of this book

Hands-On GPU Programming with Python and CUDA hits the ground running: you’ll start by learning how to apply Amdahl’s Law, use a code profiler to identify bottlenecks in your Python code, and set up an appropriate GPU programming environment. You’ll then see how to “query” the GPU’s features and copy arrays of data to and from the GPU’s own memory. As you make your way through the book, you’ll launch code directly onto the GPU and write full-blown GPU kernels and device functions in CUDA C. You’ll get to grips with profiling GPU code effectively, and fully test and debug your code using the Nsight IDE. Next, you’ll explore some of the more well-known NVIDIA libraries, such as cuFFT and cuBLAS. With a solid background in place, you will now apply your new-found knowledge to develop your very own GPU-based deep neural network from scratch. You’ll then explore advanced topics, such as warp shuffling, dynamic parallelism, and PTX assembly. In the final chapter, you’ll see some topics and applications related to GPU programming that you may wish to pursue, including AI, graphics, and blockchain. By the end of this book, you will be able to apply GPU programming to problems related to data science and high-performance computing.

What this book covers

Chapter 1, Why GPU Programming?, gives us some motivation for learning this field, and shows how to apply Amdahl's Law to estimate the potential performance improvement from moving a serial program onto a GPU.
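
To make this concrete, here is a minimal sketch of Amdahl's Law in Python; the parallelizable fraction and processor count below are illustrative values, not figures from the book.

# Amdahl's Law: speedup = 1 / ((1 - p) + p / N), where p is the fraction of
# the program that can be parallelized and N is the number of processors
# (for a GPU, think of N as the number of cores working on that fraction).

def amdahl_speedup(p, N):
    """Estimated overall speedup from parallelizing a fraction p over N processors."""
    return 1.0 / ((1.0 - p) + p / N)

# Example: if 75% of the runtime is parallelizable and we have 2,048 GPU cores,
# the best overall speedup we can hope for is roughly 4x.
print(amdahl_speedup(0.75, 2048))  # ~3.99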

Chapter 2, Setting Up Your GPU Programming Environment, explains how to set up an appropriate Python and C++ development environment for CUDA under both Windows and Linux.

Chapter 3, Getting Started with PyCUDA, shows the most essential skills we will need for programming GPUs from Python. We will notably see how to transfer data to and from a GPU using PyCUDA's gpuarray class, and how to compile simple CUDA kernels with PyCUDA's ElementwiseKernel function.
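
As a rough sketch of what this looks like in practice (assuming a working PyCUDA installation and a CUDA-capable GPU), the following transfers an array with gpuarray and doubles it with an ElementwiseKernel; the kernel and variable names here are illustrative rather than the book's own examples.

import numpy as np
import pycuda.autoinit                      # creates a CUDA context on import
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

# Copy host data to the GPU's memory.
host_data = np.float32(np.random.rand(1024))
device_data = gpuarray.to_gpu(host_data)

# A simple elementwise kernel: out[i] = 2 * in[i] (CUDA C compiled by PyCUDA).
double_kernel = ElementwiseKernel(
    "float *out, float *in",
    "out[i] = 2.0f * in[i];",
    "double_kernel")

device_out = gpuarray.empty_like(device_data)
double_kernel(device_out, device_data)

# Copy the result back to the host and check it.
print(np.allclose(device_out.get(), 2 * host_data))  # True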

Chapter 4, Kernels, Threads, Blocks, and Grids, teaches the fundamentals of writing effective CUDA kernels, which are parallel functions that are launched on the GPU. We will see how to write CUDA device functions ("serial" functions called directly by CUDA kernels), and learn about CUDA's abstract grid/block structure and the role it plays in launching kernels.
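
The sketch below, again assuming a PyCUDA setup, shows the general shape of a full kernel compiled with PyCUDA's SourceModule: each thread derives a global index from the block/grid structure and processes one element. The kernel itself is an illustrative example, not one taken from the book.

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

# A full CUDA C kernel: each thread computes its global index from the
# block/grid structure and squares one element of the array.
ker = SourceModule("""
__global__ void square(float *out, float *in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * in[i];
}
""")
square = ker.get_function("square")

n = 10000
x = gpuarray.to_gpu(np.float32(np.random.rand(n)))
y = gpuarray.empty_like(x)

# Launch with 128 threads per block and enough blocks to cover all n elements.
block_size = 128
grid_size = (n + block_size - 1) // block_size
square(y, x, np.int32(n), block=(block_size, 1, 1), grid=(grid_size, 1, 1))

print(np.allclose(y.get(), x.get() ** 2))  # True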

Chapter 5, Streams, Events, Contexts, and Concurrency, covers the notion of CUDA Streams, a feature that allows us to launch and synchronize many kernels on a GPU concurrently. We will see how to use CUDA Events to time kernel launches, and how to create and use CUDA Contexts.
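
The sketch below, assuming a PyCUDA setup, times a kernel launch with a pair of CUDA Events and creates a CUDA Stream for an asynchronous transfer; the kernel and array sizes are illustrative.

import numpy as np
import pycuda.autoinit
import pycuda.driver as drv
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

scale = ElementwiseKernel("float *x", "x[i] = 2.0f * x[i];", "scale_ker")

x = gpuarray.to_gpu(np.float32(np.random.rand(5_000_000)))

# CUDA Events mark points in the GPU's command queue; the elapsed time
# between two recorded events gives us the kernel's execution time.
start = drv.Event()
end = drv.Event()

start.record()
scale(x)
end.record()
end.synchronize()                       # wait until the GPU reaches 'end'
print('kernel took %.3f ms' % start.time_till(end))

# A CUDA Stream is a separate queue of operations; work issued to different
# streams may run concurrently on the GPU. (For genuinely asynchronous copies,
# the host array should be page-locked.)
stream = drv.Stream()
y = gpuarray.to_gpu_async(np.float32(np.random.rand(1000)), stream=stream)
stream.synchronize()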

Chapter 6, Debugging and Profiling Your CUDA Code, fills in some of the gaps we have in terms of pure CUDA C programming, and shows us how to use the NVIDIA Nsight IDE for debugging and development, as well as how to use the NVIDIA profiling tools.

Chapter 7, Using the CUDA Libraries with Scikit-CUDA, gives us a brief tour of some of the important standard CUDA libraries by way of the Python Scikit-CUDA module, including cuBLAS, cuFFT, and cuSOLVER.
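
As an illustrative sketch (assuming scikit-cuda is installed alongside PyCUDA), the following multiplies two matrices on the GPU through skcuda.linalg, which calls cuBLAS under the hood; the sizes and tolerances are arbitrary.

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg

# skcuda.linalg wraps cuBLAS routines; init() sets up the underlying
# CUBLAS context.
linalg.init()

a = gpuarray.to_gpu(np.float32(np.random.rand(256, 256)))
b = gpuarray.to_gpu(np.float32(np.random.rand(256, 256)))

# Matrix multiplication on the GPU via cuBLAS (gemm under the hood).
c = linalg.dot(a, b)

print(np.allclose(c.get(), np.dot(a.get(), b.get()), rtol=1e-3))  # True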

Chapter 8, The CUDA Device Function Libraries and Thrust, shows us how to use the cuRAND and CUDA Math API libraries in our code, as well as how to use CUDA Thrust C++ containers.
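
Since these are device function libraries, the book uses them from within CUDA C kernels; as a lighter-weight illustration, PyCUDA also exposes cuRAND-style generators through its curandom module. The sketch below relies on that wrapper (the XORWOWRandomNumberGenerator class and its gen_uniform/gen_normal methods are PyCUDA's API, not code from the book).

import numpy as np
import pycuda.autoinit
from pycuda import curandom

# XORWOW is one of the cuRAND generators; PyCUDA's wrapper launches a kernel
# that calls the cuRAND device API on our behalf.
rng = curandom.XORWOWRandomNumberGenerator()

# Fill GPU arrays with uniform and normally distributed random floats.
uniform_gpu = rng.gen_uniform((1000000,), np.float32)
normal_gpu = rng.gen_normal((1000000,), np.float32)

print(uniform_gpu.get().mean())   # should be close to 0.5
print(normal_gpu.get().std())     # should be close to 1.0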

Chapter 9, Implementation of a Deep Neural Network, serves as a capstone in which we learn how to build an entire deep neural network from scratch, applying many of the ideas we have learned in the text.

Chapter 10, Working with Compiled GPU Code, shows us how to interface our Python code with pre-compiled GPU code, using both PyCUDA and the ctypes module.
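
As a purely hypothetical sketch of the ctypes side of this, suppose we had compiled a CUDA C file into a shared library exporting a plain C function that launches a kernel; the library name my_gpu_lib.so and the function launch_kernel below are made up for illustration.

import ctypes
import numpy as np

# Hypothetical: a shared library built from CUDA C (for example with
# nvcc -shared -Xcompiler -fPIC) that exports a C function launching a kernel.
gpu_lib = ctypes.CDLL('./my_gpu_lib.so')

# Declare the C signature: void launch_kernel(float *out, int n)
gpu_lib.launch_kernel.argtypes = [ctypes.POINTER(ctypes.c_float), ctypes.c_int]
gpu_lib.launch_kernel.restype = None

n = 1024
out = np.zeros(n, dtype=np.float32)

# Pass a pointer to the NumPy buffer so the C/CUDA side can fill it in.
gpu_lib.launch_kernel(out.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
                      ctypes.c_int(n))
print(out[:5])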

Chapter 11, Performance Optimization in CUDA, teaches some very low-level performance optimization tricks, especially in relation to CUDA, such as warp shuffling, vectorized memory access, using inline PTX assembly, and atomic operations.
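
As one small illustration of this kind of low-level work (atomic operations, in this case), the following PyCUDA sketch uses atomicAdd so that many threads can safely accumulate into a single sum; the kernel is an illustrative example rather than one from the book.

import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

# atomicAdd lets many threads safely accumulate into the same memory
# location; here every thread adds its own element into a single total.
ker = SourceModule("""
__global__ void atomic_sum(float *total, float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(total, x[i]);
}
""")
atomic_sum = ker.get_function("atomic_sum")

n = 100000
x = gpuarray.to_gpu(np.float32(np.random.rand(n)))
total = gpuarray.zeros((1,), dtype=np.float32)

block = 256
grid = (n + block - 1) // block
atomic_sum(total, x, np.int32(n), block=(block, 1, 1), grid=(grid, 1, 1))

print(np.allclose(total.get()[0], x.get().sum(), rtol=1e-3))  # True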

Chapter 12, Where to Go from Here, is an overview of some of the educational and career paths that will build upon your now-solid foundation in GPU programming.