We continue our tour of MPI in this chapter by focusing on more advanced aspects of message passing. In particular, we explore Grid Parallelism, a structured approach to distributed computing for efficiently processing spatially organized data. We will work through a detailed image processing example that illustrates the use of non-blocking communications and localized patterns of inter-process message exchange, built on an appropriately configured Rmpi master/worker cluster.
In this chapter, we will cover additional MPI API calls, including MPI_Cart_create(), MPI_Cart_rank(), MPI_Probe(), and MPI_Test(), and we will briefly revisit parLapply(), which we first encountered in Chapter 1, Simple Parallelism with R (and even snow gets a mention).
So, without further ado, let's discover how to perform spatially oriented parallel processing using MPI in R.