Mastering Parallel Programming with R

By: Simon R. Chapple, Terence Sloan, Thorsten Forster, Eilidh Troup

Overview of this book

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources. Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of R's built-in parallel package versions of lapply(), to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level scalable parallel programming using Rmpi and pbdMPI for message passing, applicable to clusters and supercomputers, and how to exploit GPUs, with their thousands of simple processors, through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; pitfalls to avoid, including deadlock and numerical instability issues; how to structure your code and data for the most appropriate type of parallelism for your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.
Table of Contents (13 chapters)

The MPI API


We will divide our coverage of the MPI API into two parts: first, point-to-point communications, and then group-wise collective communications. Functionality beyond the core communication calls is described later, in the advanced MPI API section.

First, however, we need to explain some differences between the approaches to parallelism adopted by Rmpi and pbdMPI. As already discussed, Rmpi can run within an interactive R session, whereas pbdMPI programs can only be run using mpiexec from a command shell (Rmpi programs can also be run with mpiexec).
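To make the batch-only pbdMPI style concrete, here is a minimal sketch of an SPMD (single program, multiple data) script. The file name hello.r is our own choice for illustration; the pbdMPI calls (init(), comm.rank(), comm.size(), comm.cat(), finalize()) are the package's standard API.

```r
## hello.r -- run from a command shell with, e.g.:
##   mpiexec -np 4 Rscript hello.r
library(pbdMPI)

init()  # initialize MPI; every rank runs this same script

# Each rank identifies itself within the communicator
msg <- sprintf("Hello from rank %d of %d", comm.rank(), comm.size())

# all.rank = TRUE makes every rank print, not just rank 0
comm.cat(msg, "\n", all.rank = TRUE, quiet = TRUE)

finalize()  # shut down MPI cleanly before the script exits
```

Note that there is no master here: all four processes execute the identical script from top to bottom, which is the essence of the SPMD model pbdMPI promotes.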

Rmpi adopts the master/worker paradigm and dynamically launches worker processes internally using MPI_Comm_spawn(): the launching R session is the master, and the workers form the computational cluster. Code blocks, which may include MPI communication, are then issued by the master for remote execution by the worker cluster, whose members each run a daemon-style Rmpi script that actively waits for the next command to be broadcast...