Mastering Parallel Programming with R

By: Simon R. Chapple, Terence Sloan, Thorsten Forster, Eilidh Troup

Overview of this book

R is one of the most popular programming languages used in data science. Applying R to big data and complex analytic tasks requires harnessing scalable compute resources. Mastering Parallel Programming with R presents a comprehensive and practical treatment of how to build highly scalable and efficient algorithms in R. It will teach you a variety of parallelization techniques, from simple use of R's built-in parallel package versions of lapply(), to high-level AWS cloud-based Hadoop and Apache Spark frameworks. It will also teach you low-level scalable parallel programming using Rmpi and pbdMPI for message passing, applicable to clusters and supercomputers, and how to exploit GPUs with thousands of simple processors through ROpenCL. By the end of the book, you will understand the factors that influence parallel efficiency, including assessing code performance and implementing load balancing; the pitfalls to avoid, including deadlock and numerical instability issues; how to structure your code and data for the most appropriate type of parallelism for your problem domain; and how to extract the maximum performance from your R code running on a variety of computer systems.

Chapter 2. Introduction to Message Passing

In this chapter, we will take our first look at a lower level of parallelism: explicit message passing between multiple communicating R processes. We will utilize the standard Message Passing Interface (MPI) API available to us in a number of forms on laptops, cloud clusters, and supercomputers.
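
To make this concrete before we begin, here is a minimal "hello world" sketch (not taken from the book) using Rmpi. It assumes Rmpi is installed against an OpenMPI build that supports dynamic process spawning, and the worker count of 4 is an arbitrary choice:

    # Spawn R worker processes and have each report its MPI rank.
    library(Rmpi)
    mpi.spawn.Rslaves(nslaves = 4)    # 4 workers; any small count works

    # Runs on every worker; each returns a greeting to the master.
    mpi.remote.exec(paste("Hello from rank", mpi.comm.rank(),
                          "of", mpi.comm.size()))

    # Shut the workers down cleanly before quitting R.
    mpi.close.Rslaves()
    mpi.quit()

Run this from an interactive R session on the master; mpi.remote.exec() returns a list containing one greeting per worker.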

In this chapter, you will learn about:

  • The MPI API and how to use it from R via two different packages, Rmpi and pbdMPI, together with the OpenMPI implementation of the underlying communications subsystem

  • Blocking and non-blocking point-to-point communications (see the first sketch following this list)

  • Group-based collective communications (see the second sketch following this list)
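
As a preview of point-to-point messaging, the following hedged sketch (the pingpong() function name and the tag values 33 and 44 are illustrative, not the book's own) contrasts a blocking send/receive pair with a non-blocking send that must later be completed with mpi.wait():

    library(Rmpi)
    mpi.spawn.Rslaves(nslaves = 2)

    pingpong <- function() {
      me <- mpi.comm.rank()                    # worker ranks are 1 and 2
      if (me == 1) {
        # Blocking send: returns once the message is safely handed to MPI.
        mpi.send.Robj(obj = "ping", dest = 2, tag = 33)
        # Blocking receive: waits here until the reply arrives.
        mpi.recv.Robj(source = 2, tag = 44)
      } else {
        msg <- mpi.recv.Robj(source = 1, tag = 33)
        # Non-blocking send: returns immediately; completion must be
        # confirmed with mpi.wait() on the matching request handle.
        mpi.isend.Robj(obj = paste0(msg, "-pong"), dest = 1, tag = 44,
                       request = 0)
        mpi.wait(request = 0)
        msg
      }
    }

    mpi.bcast.Robj2slave(pingpong)   # ship the function to both workers
    mpi.remote.exec(pingpong())      # master collects each worker's result
    mpi.close.Rslaves()

The blocking calls force rank 1 to wait for its reply, whereas rank 2's non-blocking send could, in principle, overlap communication with further computation before calling mpi.wait().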
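As a preview of collective communications, here is a hedged SPMD-style sketch using pbdMPI; the script name sum.r is hypothetical, and every rank executes the same file, launched for example with mpiexec -np 4 Rscript sum.r:

    library(pbdMPI)
    init()                                   # attach this process to MPI

    # Rank 0 creates the data; bcast() is a collective that copies it
    # to every rank in the communicator.
    x <- if (comm.rank() == 0) 1:100 else NULL
    x <- bcast(x, rank.source = 0)

    # Each rank sums its own slice of the data...
    chunks  <- split(x, cut(seq_along(x), comm.size(), labels = FALSE))
    partial <- sum(chunks[[comm.rank() + 1]])

    # ...and reduce() (another collective) combines the partial sums
    # onto rank 0.
    total <- reduce(partial, op = "sum", rank.dest = 0)

    comm.print(total)                        # printed by rank 0 only
    finalize()

Note the SPMD style: there is no master spawning workers; all ranks run the same program, and the collective operations are what synchronize them.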

In the next two chapters, we will explore more advanced uses of MPI, including grid-based parallel processing and running R at scale on a real-life supercomputer. For now, however, we will take an introductory tour of MPI and once again focus on our own Mac computer as the target compute environment; the information required to get you up and running with MPI on Microsoft Windows...