We will divide our coverage of the MPI API into two parts: first, point-to-point communications, followed by group-wise collective communications. Functionality beyond these core communication routines is described later, in the section on the advanced MPI API.
First, however, we need to explain some differences in the approaches to parallelism adopted by Rmpi and pbdMPI. We have already discussed that Rmpi can run within an interactive R session, whereas pbdMPI R programs can only be run using mpiexec from a command shell (Rmpi programs can also be run with mpiexec).
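To make the contrast concrete, here is a minimal sketch of a pbdMPI program in the SPMD style; the script name and its body are our own illustrative assumptions, not taken from the text. Every process runs the same script, distinguished only by its rank:

```r
# spmd_hello.R -- illustrative pbdMPI sketch (assumed example).
# Run non-interactively with, e.g.:  mpiexec -np 4 Rscript spmd_hello.R
library(pbdMPI)
init()                                   # initialize MPI; every rank executes this same script
msg <- paste("Hello from rank", comm.rank(), "of", comm.size())
comm.print(msg, all.rank = TRUE)         # print the message from every rank, not just rank 0
finalize()                               # shut down MPI cleanly before the script exits
```

Because there is no master process, each rank decides what to do based on `comm.rank()`; this is the single-program, multiple-data model that requires launching via mpiexec.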
Rmpi adopts the master/worker paradigm and dynamically launches worker processes internally using MPI_Comm_spawn(), where the launching R session is the master and the workers form the computational cluster. Code blocks that may include MPI communication are then issued by the master for remote execution on the worker cluster; each worker runs a daemon-style Rmpi R script that actively waits for the next command to be broadcast.
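The master/worker pattern just described can be sketched as follows from an interactive R session; the worker count and the evaluated expression are illustrative assumptions:

```r
# Illustrative Rmpi master/worker sketch (assumed example); run from an interactive R session.
library(Rmpi)

mpi.spawn.Rslaves(nslaves = 4)   # dynamically spawn 4 workers via MPI_Comm_spawn();
                                 # each worker enters a loop awaiting broadcast commands

# The master broadcasts this expression; each worker evaluates it and returns its result:
res <- mpi.remote.exec(paste("Worker", mpi.comm.rank(), "reporting in"))
print(res)

mpi.close.Rslaves()              # shut down the worker cluster when finished
```

Note that only the master session is interactive; the workers execute the daemon-style script that Rmpi supplies, so the user never runs worker code directly.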