Mastering Parallel Programming with R

By: Simon R. Chapple, Terence Sloan, Thorsten Forster, Eilidh Troup

Overview of this book

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources. Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of R's built-in parallel package versions of lapply(), to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level scalable parallel programming using Rmpi and pbdMPI for message passing, applicable to clusters and supercomputers, and how to exploit the thousands of simple processing cores in GPUs through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; pitfalls to avoid, including deadlock and numerical instability issues; how to structure your code and data for the most appropriate type of parallelism for your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.

The MPI standard


You can view the complete version 3.0 MPI standard, which runs to a desk-walloping 822 pages, as a PDF report (MPI Ref) at http://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf.

At the time of writing, the most recently published version of the MPI standard is 3.1 (June 2015). Although we will focus on the previous version, 3.0, the differences are not material for our purposes: MPI 3.0 is both mature and comprehensive.

Due to some of the limitations of R, in particular its inherent single-threaded nature, only a subset of the MPI standard is implemented in either Rmpi or pbdMPI. However, all the basics for point-to-point and collective group communications are available, and we shall explore these through the remainder of this chapter. To begin, we need to understand some basic concepts that apply to the world of MPI.

The MPI universe

MPI considers each separate thread of computation to be a process, and each process is assigned a unique rank, which is...