Mastering Parallel Programming with R

By Simon R. Chapple, Terence Sloan, Thorsten Forster, Eilidh Troup

Adaptive load balancing


Previously, we noted how important it is to create balanced workloads, where the compute time for each task is equal.

The task farm

When there are many more tasks than workers and each task is truly independent, a task farm is a simple parallel processing scheme that keeps all workers fully utilized: the master feeds the next available task to the next free worker, as depicted in the following diagram:

Figure 5: Task farm operating with a mix of independent, variable-compute tasks.

In this case, it does not matter that the tasks vary in the amount of compute they require, because there is no inter-task dependency (at least during the compute phase).
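
A minimal sketch of this scheme uses clusterApplyLB() from R's built-in parallel package, which dispatches the next unprocessed task to whichever worker becomes free first. The task list, the do_task() function, and the four-worker cluster below are invented purely for illustration:

library(parallel)

# Illustrative tasks with highly variable compute times: each "task" is
# simply a number of milliseconds of simulated work.
task_sizes <- sample(50:500, 40, replace = TRUE)

do_task <- function(ms) {
  Sys.sleep(ms / 1000)   # stand-in for a real, variable-length computation
  ms                     # return a value to show the task completed
}

cl <- makeCluster(4)     # four workers; adjust to your machine

# clusterApplyLB() behaves as a task farm: the master hands the next
# unprocessed task to whichever worker becomes free first, so a long task
# on one worker does not hold up the rest of the queue.
results <- clusterApplyLB(cl, task_sizes, do_task)

stopCluster(cl)

By contrast, clusterApply() pre-assigns tasks to workers in a fixed round-robin order, so it cannot adapt when task durations vary.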

Efficient grid processing

When the nature of the problem is such that the workers must cooperate during the execution of their tasks, then workload variance across the workers can lead to poor utilization, with some workers having to wait for the others to complete intermediate...