Right, it's time to take a wee breather. In this chapter, we covered the basic concepts and the API for MPI. You learned how to utilize both the Rmpi and pbdMPI packages in conjunction with OpenMPI. We explored a number of simple examples of both blocking and non-blocking communications in R, and also introduced MPI's collective communication operations. We looked into the low-level implementation of the Rmpi package's own master/worker scheme for managing the execution of R code in parallel. You now have sufficient grounding to write a wide variety of highly scalable MPI programs in R.
In the next chapter, we will complete our discussion of MPI, work through a particular MPI example that introduces spatial grid-style parallelism, and cover the remaining, slightly more esoteric MPI API functions available to us in R.