Inspecting and managing communications


Most parallel algorithms implemented in R focus on statistical numerical programming rather than symbol-based processing or exotic system architectures with less predictable communication patterns, so the following "advanced" API calls are not often needed. Nevertheless, they enable MPI processes to handle out-of-band communication and to avoid waiting unnecessarily for a communication to complete when other processing could usefully be performed. If your context permits, making use of them can certainly improve efficiency: you may, for example, be able to interleave communication with computation between successive iterations of a long-running calculation.
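As a minimal sketch of this interleaving (assuming the Rmpi package and an already initialized MPI session, for example after mpi.spawn.Rslaves(); the buffer size, tag, and stand-in workload are illustrative), a non-blocking receive can be posted and then polled with mpi.test() while useful computation continues:

```r
library(Rmpi)

# Illustrative sketch: assumes an initialized MPI session; the buffer
# size, tag, and loop body are placeholders, not the book's own example.

n   <- 1000L
buf <- double(n)   # pre-allocated receive buffer (must outlive the request)

# Post a non-blocking receive for n doubles (type = 2) from any source,
# tag 7, tracked against request handle 0.
mpi.irecv(buf, type = 2, source = mpi.any.source(), tag = 7, request = 0)

partial <- 0
repeat {
  # mpi.test() returns TRUE once the communication behind request 0 has
  # completed; until then, keep doing useful work.
  if (mpi.test(request = 0)) break
  partial <- partial + sum(rnorm(1e5))   # stand-in for real computation
}
# buf now holds the received data; fold it into the ongoing computation.
```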

The following table covers MPI_Probe, which inspects a pending incoming message before it is received; MPI_Test, which checks whether a non-blocking communication has completed; and MPI_Cancel, which enables you to rescind a communication that has not yet completed:

INSPECTING / MANAGING COMMUNICATIONS

MPI API call    Purpose
MPI_Probe       Blocks until a matching incoming message is available, filling in a status object (source, tag, and message size) without receiving the message itself.
MPI_Test        Checks whether a given non-blocking send or receive has completed, returning immediately either way.
MPI_Cancel      Marks a pending non-blocking communication for cancellation; the request must still be completed with a wait or test call.
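In Rmpi, these calls surface as mpi.probe() (and its non-blocking variant mpi.iprobe()), mpi.test(), and mpi.cancel(). The following sketch shows one plausible use of each (again assuming an initialized MPI session; the tags, message sizes, and request slots are illustrative):

```r
library(Rmpi)

# Illustrative sketch: assumes an initialized MPI session; sources,
# tags, and sizes are placeholders.

# Block until a message with tag 7 from any source is available,
# recording its details in status slot 0 without receiving it.
mpi.probe(source = mpi.any.source(), tag = 7, comm = 1, status = 0)

info <- mpi.get.sourcetag(status = 0)        # c(source, tag) of the message
len  <- mpi.get.count(type = 2, status = 0)  # number of doubles it carries

# Size the receive buffer from the probed length, then receive for real.
buf <- mpi.recv(double(len), type = 2, source = info[1], tag = info[2])

# Cancelling: rescind a non-blocking receive that has not been matched.
mpi.irecv(double(1), type = 2, source = mpi.any.source(), tag = 99,
          request = 0)
mpi.cancel(request = 0)
mpi.wait(request = 0, status = 0)            # complete the cancelled request
cancelled <- mpi.test.cancelled(status = 0)  # TRUE if the cancel succeeded
```

Note that, as in the underlying MPI standard, a cancelled request must still be completed with a wait or test call before mpi.test.cancelled() can confirm whether the cancellation took effect.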