Hands-On GPU Computing with Python

By : Avimanyu Bandyopadhyay

Overview of this book

GPUs are proving to be excellent general-purpose parallel computing solutions for high-performance tasks such as deep learning and scientific computing. This book will be your guide to getting started with GPU computing. It begins by introducing GPU computing and explaining the GPU architecture and programming models. You will learn, by example, how to perform GPU programming with Python, and look at using integrations such as PyCUDA, PyOpenCL, CuPy, and Numba with Anaconda for various tasks such as machine learning and data mining. In addition to this, you will get to grips with GPU workflows, management, and deployment using modern containerization solutions. Toward the end of the book, you will become familiar with the principles of distributed computing for training machine learning models and enhancing efficiency and performance. By the end of this book, you will be able to set up a GPU ecosystem for running complex applications and data models that demand great processing capabilities, and manage memory efficiently so that your applications compute effectively and quickly.
Table of Contents (17 chapters)

Section 1: Computing with GPUs Introduction, Fundamental Concepts, and Hardware
Section 2: Hands-On Development with GPU Programming
Section 3: Containerization and Machine Learning with GPU-Powered Python

Comparing GPU programmable platforms on NVIDIA and AMD

So far, we have explored the scope of computing on NVIDIA and AMD GPUs in two separate chapters. Now, let's look specifically at how their respective APIs compare:

| NVIDIA CUDA | AMD ROCm |
| --- | --- |
| The API is called Compute Unified Device Architecture | The API is called Radeon Open Compute platform |
| Proprietary | Open source |
| Released in 2007 | Released in 2016 |
| Wider support | Still under adoption and actively catching up |
| Significant number of programmable libraries | Fewer libraries than CUDA, but development is active and ongoing |
| Cannot be used with non-NVIDIA devices | Cross-platform independence due to open standards |
| The CUDA C language is used | HIP for cross-platform code; HC for AMD GPUs |
| Files use the .cu extension | Files use the .cpp extension |
| Non-portable | CUDA code... |
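
To make the language and file-extension rows concrete, here is a minimal sketch (not taken from the book) of a vector-addition program written in CUDA C. The comments note the corresponding HIP calls, since HIP deliberately mirrors the CUDA runtime API (for example, hipMalloc and hipMemcpy in place of cudaMalloc and cudaMemcpy), which is what makes porting between the two platforms comparatively straightforward:

```cuda
// Minimal vector-add sketch. Saved as vector_add.cu for nvcc on NVIDIA,
// or as vector_add.cpp for hipcc on AMD (with the HIP header/API names).

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>   // HIP: #include <hip/hip_runtime.h>

// Kernel syntax is identical in CUDA C and HIP C++.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device buffers. HIP: hipMalloc / hipMemcpy take the same arguments.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // The triple-chevron launch syntax is also supported by HIP.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();            // HIP: hipDeviceSynchronize()

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);      // Expected: 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Such a file is typically compiled with nvcc on the NVIDIA side or hipcc on the AMD side; AMD also ships hipify tools that translate existing CUDA sources into HIP.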