Mastering Parallel Programming with R
In this chapter, we explored several more advanced aspects of message passing through its application to grid-based parallelism, including data segmentation and distribution for spatial operations, the use of non-blocking communications, localized communication patterns between MPI processes, and how to map an SPMD-style grid onto a standard Rmpi master/worker cluster. Whilst the illustrative example in image processing may not seem the most natural home for R programming, the knowledge gained through this example is applicable to a wide range of large matrix-iterative algorithms.
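The non-blocking, neighbour-to-neighbour communication pattern described above can be sketched with Rmpi's non-blocking primitives. This is a minimal illustration only, assuming Rmpi is installed and the script is launched under an MPI runtime (for example, `mpirun -np 4 Rscript halo.R`); the grid dimensions, tags, and request slots are arbitrary choices for the sketch:

```r
library(Rmpi)

rank <- mpi.comm.rank()
size <- mpi.comm.size()
nxt  <- (rank + 1) %% size        # right neighbour in a ring
prv  <- (rank - 1 + size) %% size # left neighbour in a ring

strip   <- matrix(runif(100), nrow = 10) # this rank's segment of the grid
halo.in <- double(ncol(strip))           # buffer for the incoming boundary row

# Post the receive first, then the send; neither call blocks.
mpi.irecv(halo.in, type = 2, source = prv, tag = 0, request = 0)
mpi.isend(strip[nrow(strip), ], type = 2, dest = nxt, tag = 0, request = 1)

# Interior cells can be computed here, overlapping with communication,
# before blocking until both outstanding requests complete.
mpi.waitall(2)

mpi.finalize()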
We also examined MPI in more detail, covering the additional API routines geared to inspecting and managing outstanding communications, including MPI_Probe and MPI_Test.
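Rmpi exposes these routines through wrappers such as `mpi.test()` and `mpi.iprobe()`. The following sketch, which assumes a non-blocking send was posted earlier in request slot 0 and that a worker may send an R object at any time, shows the polling pattern:

```r
library(Rmpi)

# Has the earlier non-blocking send (request slot 0) completed yet?
# If so, its buffer is safe to reuse.
send.done <- mpi.test(request = 0)

# Is a message waiting from any source, with any tag? mpi.iprobe()
# returns immediately, wrapping MPI_Iprobe.
if (mpi.iprobe(source = mpi.any.source(), tag = mpi.any.tag())) {
  src <- mpi.get.sourcetag()[1]  # sender's rank, from the probe's status
  msg <- mpi.recv.Robj(source = src, tag = mpi.any.tag())
}
```

Posting the probe before the receive means the process never blocks waiting for a message that has not yet arrived, which is the essence of managing outstanding communications.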
We finished the chapter by reviewing how Rmpi can be used in conjunction with parLapply(), and touched on how you can run an MPI cluster across a simple network of workstations.
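As a reminder of that final topic, an MPI-backed cluster can be driven through the standard parallel API. This is a sketch assuming the Rmpi and snow packages are installed, so that `makeCluster()` with `type = "MPI"` can spawn the workers via Rmpi:

```r
library(parallel)

cl  <- makeCluster(4, type = "MPI")          # four MPI worker processes
res <- parLapply(cl, 1:8, function(x) x^2)   # same interface as a socket cluster
stopCluster(cl)
Rmpi::mpi.exit()  # shut down the MPI environment cleanly
```

The appeal of this arrangement is that code written against `parLapply()` is unchanged whether the cluster runs on one machine or across a network of workstations reached through MPI.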
The grid-based processing framework that we have constructed...