Mastering Parallel Programming with R

By: Simon R. Chapple, Terence Sloan, Thorsten Forster, Eilidh Troup

Overview of this book

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources. Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of the parallel versions of lapply() in R's built-in parallel package to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level scalable parallel programming using Rmpi and pbdMPI for message passing, applicable to clusters and supercomputers, and how to exploit GPUs with thousands of simple processors through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; pitfalls to avoid, including deadlock and numerical instability issues; how to structure your code and data for the most appropriate type of parallelism for your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.

Reducing the parallel overhead


Each parallel algorithm comes with its own overhead, particularly the setup cost of apportioning the work among a set of processors and the tear-down cost of compiling the aggregated results from that set of processors.
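As a rough illustration (not code from the book), you can observe the setup and tear-down costs directly by timing how long it takes to create and then shut down a local cluster with R's built-in parallel package; the choice of eight workers here is an arbitrary assumption.

library(parallel)

# Time only the setup (spawning worker processes) and the tear-down
# (shutting them down again); no useful work is done in between.
system.time({
  cl <- makeCluster(8)   # setup: launch 8 local worker processes
  stopCluster(cl)        # tear-down: terminate the workers
})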

To get a handle on how we can approach reducing these overheads, let's first examine the process of result aggregation.

The following figure shows a very typical master-worker task farm-style approach utilizing 15 independent worker nodes. In this case, each separate task undertaken by the workers contributes to an overall result.

Each worker transmits the partial result it generates back to the master, and the master then processes all the partial results to generate the final accumulated result.

Figure 2: Master-worker flower-style arrangement.
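To make this aggregation pattern concrete, here is a minimal sketch using R's built-in parallel package, not the book's own code: the master splits the input into 15 chunks, each worker returns a partial result, and the master combines the partial results into the final answer. The 15-worker cluster size and the partial-sum task are illustrative assumptions.

library(parallel)

# Setup: the master launches 15 worker processes (one per task).
cl <- makeCluster(15)

# Apportion the work: split the input into one chunk per worker.
data   <- 1:1500000
chunks <- split(data, cut(seq_along(data), 15, labels = FALSE))

# Each worker computes a partial result for its chunk and transmits
# it back to the master.
partials <- parLapply(cl, chunks, function(chunk) sum(chunk))

# The master processes all the partial results to generate the final
# accumulated result.
result <- Reduce(`+`, partials)

# Tear-down: shut down the worker processes.
stopCluster(cl)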

Let's also assume that each worker task requires the same amount of computational effort, so each worker finishes its task at approximately the same moment in time.

It's not difficult to see from the flower...