Mastering Parallel Programming with R

By Simon R. Chapple, Terence Sloan, Thorsten Forster, Eilidh Troup
Overview of this book

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources. Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of R's built-in parallel package versions of lapply(), to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level scalable parallel programming using Rmpi and pbdMPI for message passing, applicable to clusters and supercomputers, and how to exploit GPUs, with their thousands of simple processing cores, through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; pitfalls to avoid, including deadlock and numerical instability issues; how to structure your code and data for the most appropriate type of parallelism for your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.

Understanding parallel efficiency


Let's first go right back to the very beginning and consider why we might choose to write a parallel program in the first place.

The simple answer, of course, is that we want to speed up our algorithm, computing the answer much faster than we can by running in serial, where only a single thread of program execution can be utilized.
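To make the contrast concrete, here is a minimal sketch, not taken from the book's own examples, that times the same workload serially with lapply() and in parallel with the parallel package's mclapply(), then reports the resulting speedup. The toy slow_task() function, the sleep-based workload, and the choice of core count are assumptions purely for illustration; mclapply() forks worker processes and so applies to Unix-alikes, with parLapply() over a PSOCK cluster being the usual route on Windows.

    library(parallel)

    # Stand-in for a genuinely expensive computation (illustrative only)
    slow_task <- function(x) {
      Sys.sleep(0.1)
      x^2
    }

    inputs <- 1:40

    # Serial: a single thread works through every element in turn
    serial_time <- system.time(
      serial_result <- lapply(inputs, slow_task)
    )["elapsed"]

    # Parallel: mclapply() distributes the elements across forked workers
    n_cores <- max(1L, detectCores() - 1L)
    parallel_time <- system.time(
      parallel_result <- mclapply(inputs, slow_task, mc.cores = n_cores)
    )["elapsed"]

    # Speedup is the ratio of serial elapsed time to parallel elapsed time
    cat(sprintf("Speedup on %d cores: %.1fx\n",
                n_cores, serial_time / parallel_time))

In practice, the measured speedup falls short of the number of cores used because of scheduling overhead and any serial portions of the work, which is precisely the notion of parallel efficiency this chapter examines.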

In this day and age of big data, we will extend this view to cover the otherwise incomputable: problems where the resources of a single machine architecture make it intractable to compute a complex algorithm across a massive scale of data. For these, we have to employ thousands upon thousands of computational cores, terabytes of memory, petabytes of storage, and a supporting management infrastructure that can cope with the inevitable runtime failure of individual components during the aggregate lifetime of the computation, which can potentially run to millions of hours.

Another approach to utilizing parallelization, and arguably its...