Mastering Parallel Programming with R

By: Simon R. Chapple, Terence Sloan, Thorsten Forster, Eilidh Troup
Overview of this book

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources. Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of the parallel versions of lapply() in R's built-in parallel package, to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level scalable parallel programming using Rmpi and pbdMPI for message passing, applicable to clusters and supercomputers, and how to exploit GPUs comprising thousands of simple processors through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; the pitfalls to avoid, including deadlock and numerical instability issues; how to structure your code and data for the most appropriate type of parallelism for your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.

Summary


In this chapter, we explored several more advanced aspects of message passing through its application to grid-based parallelism, including data segmentation and distribution for spatial operations, the use of non-blocking communications, localized communication patterns between MPI processes, and how to map an SPMD-style grid onto a standard Rmpi master/worker cluster. Whilst the illustrative example from image processing may not seem the most natural home for R programming, the knowledge gained through it will be applicable to a wide range of large matrix-iterative algorithms.
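As a reminder of the non-blocking pattern at the heart of the grid scheme, the following is a minimal sketch of a boundary (halo) exchange between neighboring worker ranks using Rmpi's mpi.isend() and mpi.irecv(). The ring-neighbor calculation, buffer length, and tag value are illustrative assumptions, not the chapter's exact code:

# A minimal sketch of non-blocking boundary exchange with Rmpi.
# Assumes it runs on each spawned worker (communicator 1), and that
# a simple ring of workers stands in for the 2-D grid neighborhood;
# neighbor ranks, buffer length, and tag are illustrative only.
library(Rmpi)

rank  <- mpi.comm.rank(comm = 1)
nproc <- mpi.comm.size(comm = 1) - 1         # workers, excluding the master
left  <- if (rank == 1) nproc else rank - 1  # hypothetical ring neighbors
right <- if (rank == nproc) 1 else rank + 1

edge <- as.double(1:100)   # this rank's boundary values to send
halo <- double(100)        # preallocated buffer that Rmpi fills in place

mpi.irecv(halo, type = 2, source = left,  tag = 33, comm = 1, request = 0)
mpi.isend(edge, type = 2, dest   = right, tag = 33, comm = 1, request = 1)

# ... compute on the interior of the local grid block here, overlapping
# communication with computation ...

mpi.wait(request = 0)   # blocks until the halo has arrived
mpi.wait(request = 1)   # blocks until 'edge' is safe to reuse

Posting the receive before the matching send, and only waiting after the interior work is done, is what lets communication overlap with computation.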

We also covered the MPI API in more detail, explaining the additional routines geared toward inspecting and managing outstanding communications, including MPI_Probe and MPI_Test.
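In Rmpi, these routines surface as mpi.probe()/mpi.iprobe() and mpi.test(). The fragment below sketches the polling idiom; the double-typed payload and the request number are assumptions made for illustration:

# Sketch: inspecting pending and outstanding communications in Rmpi.
# mpi.iprobe() wraps MPI_Iprobe (non-blocking check for a pending
# message); mpi.test() wraps MPI_Test (check a posted isend/irecv).
# The payload type and request number here are assumptions.
library(Rmpi)

if (mpi.iprobe(source = mpi.any.source(), tag = mpi.any.tag(),
               comm = 1, status = 0)) {
  srctag <- mpi.get.sourcetag(status = 0)        # sender rank and tag
  n      <- mpi.get.count(type = 2, status = 0)  # how many doubles await
  buf    <- double(n)
  mpi.recv(buf, type = 2, source = srctag[1], tag = srctag[2], comm = 1)
}

# Has the non-blocking operation posted as request 1 completed yet?
done <- mpi.test(request = 1)

Probing before receiving lets a worker size its buffer to the incoming message rather than guessing, and mpi.test() lets it poll a posted operation without blocking.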

We finished the chapter by reviewing how Rmpi can be used in conjunction with parLapply(), and touched on how you can run an MPI cluster across a simple network of workstations.
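As a pointer back to that material, here is a minimal sketch of driving an Rmpi-backed cluster through parLapply(); the worker count and the toy task are assumptions, and the commented launch line shows one possible way such a script might be run over a hostfile listing the workstations:

# Sketch: an MPI-backed cluster driven by parLapply(). Assumes Rmpi
# and snow are installed alongside a working MPI runtime; the worker
# count and the squaring task are purely illustrative. Across a
# network of workstations, the script might be launched with, e.g.:
#   mpirun -np 1 --hostfile hosts.txt R --no-save -f script.R
# (hosts.txt listing the machines is an assumed setup).
library(parallel)

cl <- makeCluster(4, type = "MPI")   # spawns 4 Rmpi workers
result <- parLapply(cl, 1:100, function(x) x^2)
stopCluster(cl)                      # shuts the MPI workers down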

The grid-based processing framework that we have constructed...