Hands-On GPU Computing with Python

By: Avimanyu Bandyopadhyay

Overview of this book

GPUs are proving to be excellent general-purpose parallel computing solutions for high-performance tasks such as deep learning and scientific computing. This book will be your guide to getting started with GPU computing. It begins by introducing GPU computing and explaining the GPU architecture and programming models. You will learn, by example, how to perform GPU programming with Python, and look at using integrations such as PyCUDA, PyOpenCL, CuPy, and Numba with Anaconda for various tasks such as machine learning and data mining. In addition to this, you will get to grips with GPU workflows, management, and deployment using modern containerization solutions. Toward the end of the book, you will become familiar with the principles of distributed computing for training machine learning models and enhancing efficiency and performance. By the end of this book, you will be able to set up a GPU ecosystem for running complex applications and data models that demand great processing capabilities, and to manage memory efficiently so that your applications compute effectively and quickly.
Table of Contents (17 chapters)

Section 1: Computing with GPUs Introduction, Fundamental Concepts, and Hardware
Section 2: Hands-On Development with GPU Programming
Section 3: Containerization and Machine Learning with GPU-Powered Python

Computing on AMD APUs and GPUs

AMD's GPU programming platforms are centered around the Heterogeneous System Architecture (HSA). HSA is a cross-vendor set of specifications that allows CPUs and GPUs to be integrated on the same bus, with shared memory and tasks. HSA continues to be developed by the HSA Foundation, whose many member organizations include AMD, itself one of the founding members.

HSA was designed from a programmer's perspective to address the overhead of prolonged data transfers between separate memories, especially when both CPUs and GPUs are involved (as is the norm when programming with CUDA or OpenCL).
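To make the problem concrete, the following is a minimal PyOpenCL sketch (not taken from the book) showing the explicit host-to-device and device-to-host copies that are the norm in OpenCL and that HSA's shared-memory model is designed to avoid; the vector-addition kernel and buffer sizes are illustrative assumptions only:

import numpy as np
import pyopencl as cl

# Create a context and command queue on whatever OpenCL device is available
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Host-side (CPU) arrays
host_a = np.random.rand(1_000_000).astype(np.float32)
host_b = np.random.rand(1_000_000).astype(np.float32)
host_out = np.empty_like(host_a)

mf = cl.mem_flags
# Explicit copies from host memory into separate device buffers
dev_a = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=host_a)
dev_b = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=host_b)
dev_out = cl.Buffer(ctx, mf.WRITE_ONLY, host_out.nbytes)

# Illustrative vector-addition kernel
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.add(queue, host_a.shape, None, dev_a, dev_b, dev_out)

# Explicit copy of the result back from device memory to host memory
cl.enqueue_copy(queue, host_out, dev_out)

On an HSA-capable APU, the CPU and GPU share the same physical memory, so the two copy-in steps and the copy-back step above become unnecessary in principle; those transfers are exactly the cost that HSA set out to remove.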

Accelerated processing units (APUs)

Originally started as the Fusion project in 2006, accelerated processing units...