Mastering Parallel Programming with R

By : Simon R. Chapple, Terence Sloan, Thorsten Forster, Eilidh Troup

Overview of this book

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources. Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of R's built-in parallel package versions of lapply(), to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level scalable parallel programming using Rmpi and pbdMPI for message passing, applicable to clusters and supercomputers, and how to exploit GPUs comprising thousands of simple processors through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; pitfalls to avoid, including deadlock and numerical instability; how to structure your code and data for the type of parallelism most appropriate to your problem domain; and how to extract the maximum performance from your R code on a variety of computer systems.

Summary


In this chapter, we looked in detail at how to exploit your laptop's GPU to perform computation on behalf of R programs using the ROpenCL package. Along the way, you also learned a little about programming highly efficient kernel functions in the C programming language, employing loop unrolling and careful use of high-speed memory.

As we noted, while the goal of OpenCL is heterogeneous portability, with the same code able to run on a variety of devices (including the CPU itself), the reality is that, for GPUs in particular, there is room for optimizations tailored to the characteristics of the underlying device hardware in order to extract the maximum possible performance. Obtaining the best performance from a kernel function is a matter of balancing memory access against vector processing, and ultimately requires your own experimentation.

In the next and final chapter, we will distill the essential lessons from the various different...