Mastering Parallel Programming with R

By: Simon R. Chapple, Terence Sloan, Thorsten Forster, Eilidh Troup
Overview of this book

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources. Mastering Parallel Programming with R presents a comprehensive and practical treatise on how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of the parallelized lapply() variants in R's built-in parallel package, to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level scalable parallel programming with Rmpi and pbdMPI for message passing, applicable to clusters and supercomputers, and how to exploit GPUs, with their thousands of simple processors, through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; the pitfalls to avoid, including deadlock and numerical instability issues; how to structure your code and data for the most appropriate type of parallelism for your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.
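To give a flavor of the simplest of these techniques, here is a minimal sketch (not taken from the book itself) of the parallel package's parLapply(), the cluster-backed analogue of lapply(); the input vector and the squaring function are purely illustrative:

    library(parallel)

    # Start a local cluster with one worker process per detected core.
    cl <- makeCluster(detectCores())

    # parLapply() behaves like lapply(), but splits the input across
    # the workers and gathers the results back into a single list.
    results <- parLapply(cl, 1:100, function(x) x^2)

    stopCluster(cl)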

Three steps to successful parallelization


The following distilled three-step guidance is intended to help you decide which form of parallelism might best suit your particular algorithm or problem; it summarizes what you have learned throughout this book. Necessarily, it applies a level of generalization, so approach these guidelines with due consideration:

  1. Determine the type of parallelism that may best apply to your algorithm.

    Is the problem you are solving more compute-bound or data-bound? If the former, your problem may be amenable to GPUs (refer to Chapter 5, The Supercomputer in Your Laptop, on OpenCL). If the latter, your problem may be more amenable to cluster-based computing (refer to Chapter 1, Simple Parallelism with R); and if it requires a complex processing chain, consider using the Spark framework (described in the bonus chapter).

    Can you divide the problem data/space to achieve a balanced workload across all processes, or do you need to employ an adaptive...
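
The balanced-versus-adaptive workload question above can be made concrete with R's built-in parallel package: parLapply() pre-splits the input into fixed chunks per worker, whereas parLapplyLB() hands out tasks one at a time as workers become free. This is a minimal sketch; the uneven task durations are purely illustrative:

    library(parallel)

    cl <- makeCluster(4)

    # Illustrative tasks whose run times vary widely; a fixed split
    # would leave some workers idle while others finish late.
    durations <- runif(40, min = 0, max = 2)
    work <- function(t) { Sys.sleep(t); t }

    # Static division: parLapply() pre-assigns fixed chunks of input.
    static <- parLapply(cl, durations, work)

    # Adaptive division: parLapplyLB() schedules tasks one at a time
    # as workers become free, balancing an uneven workload.
    balanced <- parLapplyLB(cl, durations, work)

    stopCluster(cl)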