In this chapter, we will take our first look at a lower level of parallelism: explicit message passing between multiple communicating R processes. We will utilize the standard Message Passing Interface (MPI) API available to us in a number of forms on laptops, cloud clusters, and supercomputers.
In this chapter, you will learn about:
- The MPI API and how to use it via two different R packages, Rmpi and pbdMPI, together with the OpenMPI implementation of the communications subsystem
- Blocking and non-blocking point-to-point communications
- Group-based collective communications
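To give a flavour of what MPI programming in R looks like before we dive into the details, here is a minimal "hello world" sketch using the pbdMPI package. This assumes you have pbdMPI and an MPI runtime such as OpenMPI installed; the filename hello.R and the process count of 4 are arbitrary choices for illustration.

```r
# Minimal pbdMPI sketch: each MPI process reports its rank.
# Launch with, for example: mpirun -np 4 Rscript hello.R
library(pbdMPI)

init()                         # initialise the MPI environment
rank <- comm.rank()            # this process's rank (0-based)
size <- comm.size()            # total number of processes in the job
comm.cat("Hello from rank", rank, "of", size, "\n", all.rank = TRUE)
finalize()                     # shut down MPI cleanly before exiting
```

Unlike the implicit parallelism we have seen so far, every process runs the same script from top to bottom, and it is the rank that distinguishes the work each one does; this Single Program Multiple Data (SPMD) style is the pattern we will use throughout the chapter.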
In the next two chapters, we will explore more advanced uses of MPI, including grid-based parallel processing and running R at scale on a real-life supercomputer. For now, however, we will take an introductory tour of MPI, once again using our own Mac computer as the target compute environment; the information required to get you up and running with MPI on Microsoft Windows...