Hands-On GPU Programming with Python and CUDA

By: Dr. Brian Tuomanen

Overview of this book

Hands-On GPU Programming with Python and CUDA hits the ground running: you’ll start by learning how to apply Amdahl’s Law, use a code profiler to identify bottlenecks in your Python code, and set up an appropriate GPU programming environment. You’ll then see how to “query” the GPU’s features and copy arrays of data to and from the GPU’s own memory. As you make your way through the book, you’ll launch code directly onto the GPU and write full-blown GPU kernels and device functions in CUDA C. You’ll get to grips with profiling GPU code effectively and fully test and debug your code using the Nsight IDE. Next, you’ll explore some of the better-known NVIDIA libraries, such as cuFFT and cuBLAS. With a solid background in place, you will now apply your newfound knowledge to develop your very own GPU-based deep neural network from scratch. You’ll then explore advanced topics, such as warp shuffling, dynamic parallelism, and PTX assembly. In the final chapter, you’ll see some topics and applications related to GPU programming that you may wish to pursue, including AI, graphics, and blockchain. By the end of this book, you will be able to apply GPU programming to problems related to data science and high-performance computing.

To get the most out of this book

This is quite a technical subject, so we will have to make a few assumptions about the reader's programming background. Specifically, we will assume the following:

  • You have an intermediate level of programming experience in Python.
  • You are familiar with standard Python scientific packages, such as NumPy, SciPy, and Matplotlib.
  • You have an intermediate ability in any C-based programming language (C, C++, Java, Rust, Go, and so on).
  • You understand the concept of dynamic memory allocation in C (particularly how to use the C malloc and free functions); a brief refresher sketch follows this list.
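
If it has been a while since you last used malloc and free, the following minimal sketch (our own illustration, not code from the book's bundle) shows the allocate-use-free pattern of dynamic memory allocation in C that we have in mind; the array size and variable names here are arbitrary:

#include <stdlib.h>

int main(void)
{
    // Request heap space for 100 doubles; malloc returns NULL on failure.
    double *x = (double *) malloc(100 * sizeof(double));
    if (x == NULL)
        return -1;

    x[0] = 3.14;   // use the block like an ordinary array
    free(x);       // release the block when we are done with it
    return 0;
}

The CUDA calls for device memory (cudaMalloc and cudaFree) follow the same allocate-use-free pattern, which is why we assume comfort with it.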

GPU programming is mostly applicable to fields that are highly scientific or mathematical in nature, so many (if not most) of the examples will make use of some math. For this reason, we assume that the reader has some familiarity with first- or second-year college mathematics, including the following:

  • Trigonometry (the sinusoidal functions: sin, cos, tan …)
  • Calculus (integrals, derivatives, gradients)
  • Statistics (uniform and normal distributions)
  • Linear algebra (vectors, matrices, vector spaces, dimensionality)

Don't worry if you haven't learned some of these topics, or if it's been a while, as we will try to review some of the key programming and math concepts as we go along.

We will make one more assumption here. Remember that we will be working only with CUDA in this text, which is a proprietary programming platform for NVIDIA hardware, so we will need some specific hardware in our possession before we get started. I will therefore assume that the reader has access to the following:

  • A 64-bit x86 Intel/AMD-based PC
  • 4 Gigabytes (GB) of RAM or more
  • An entry-level NVIDIA GTX 1050 GPU (Pascal Architecture) or better

The reader should know that most older GPUs will probably work fine with most, if not all, of the examples in this text, but they have only been tested on a GTX 1050 under Windows 10 and a GTX 1070 under Linux. Specific instructions regarding setup and configuration are given in Chapter 2, Setting Up Your GPU Programming Environment.

Download the example code files

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

  1. Log in or register at www.packt.com.
  2. Select the SUPPORT tab.
  3. Click on Code Downloads & Errata.
  4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

  • WinRAR/7-Zip for Windows
  • Zipeg/iZip/UnRarX for Mac
  • 7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-GPU-Programming-with-Python-and-CUDA. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "We can now use the cublasSaxpy function."

A block of code is set as follows:

cublas.cublasDestroy(handle)
print 'cuBLAS returned the correct value: %s' % np.allclose(np.dot(A,x), y_gpu.get())

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

def compute_gflops(precision='S'):

    if precision=='S':
        float_type = 'float32'
    elif precision=='D':
        float_type = 'float64'
    else:
        return -1

Any command-line input or output is written as follows:

$ run cublas_gemm_flops.py

Bold: Indicates a new term, an important word, or words that you see on screen. For example, words in menus or dialog boxes appear in the text like this.

Warnings or important notes appear like this.

Tips and tricks appear like this.