Hands-On GPU Computing with Python

By: Avimanyu Bandyopadhyay

Overview of this book

GPUs are proving to be excellent general-purpose parallel computing solutions for high-performance tasks such as deep learning and scientific computing. This book will be your guide to getting started with GPU computing. It begins by introducing GPU computing and explaining the GPU architecture and programming models. You will learn, by example, how to perform GPU programming with Python, and look at using integrations such as PyCUDA, PyOpenCL, CuPy, and Numba with Anaconda for various tasks such as machine learning and data mining. In addition to this, you will get to grips with GPU workflows, management, and deployment using modern containerization solutions. Toward the end of the book, you will become familiar with the principles of distributed computing for training machine learning models and enhancing efficiency and performance. By the end of this book, you will be able to set up a GPU ecosystem for running complex applications and data models that demand great processing capabilities, and be able to manage memory efficiently so that your applications run effectively and quickly.
Table of Contents (17 chapters)

  • Section 1: Computing with GPUs Introduction, Fundamental Concepts, and Hardware
  • Section 2: Hands-On Development with GPU Programming
  • Section 3: Containerization and Machine Learning with GPU-Powered Python

Multiple ways to install DeepChem

In this section, we are going to discuss three ways of installing DeepChem on your system. According to the official documentation, the recommended way to install it is to use a Conda-based distribution, such as Anaconda or Miniconda.
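To give a sense of what the Conda route involves, the following is a minimal sketch of creating a dedicated environment and installing the GPU-enabled package. The exact channels, package name (deepchem-gpu), and version pin are assumptions that should be verified against the official DeepChem installation instructions for the release you target:

    conda create --name deepchem python=3.6
    conda activate deepchem
    conda install -c deepchem -c rdkit -c conda-forge -c omnia deepchem-gpu=2.3.0

We will revisit this approach in more detail when we cover Conda on PyCharm.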

The three approaches that we will demonstrate, in order, are as follows:

  • Google Colab
  • Conda on PyCharm
  • NVIDIA Docker

Let's get hands-on, starting with Google Colab!

Installing DeepChem on Google Colab

This method is highly recommended for getting started with DeepChem. Why? At the time of writing, Google has recently replaced the Tesla K80 with the Tesla T4, an AI inference accelerator GPU with Tensor Cores and 16 GB of memory.
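Because Colab already provides Python and pip, a typical first notebook cell only needs to confirm the GPU and install DeepChem into the runtime. The following is a minimal sketch; the deepchem pip package name is an assumption to check against the current documentation, and the GPU runtime must first be enabled under Runtime | Change runtime type:

    !nvidia-smi            # check which GPU is attached (for example, a Tesla T4)
    !pip install deepchem  # install DeepChem into the current Colab runtime

    import deepchem as dc
    print(dc.__version__)  # confirm that the installation is importable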

As mentioned in Chapter 10, Accelerated...