In the Parallelizing processing with pmap recipe, we found that while using pmap is easy enough, knowing when to use it is more complicated. Each task in the collection has to take enough processing time to make the overhead of threading, coordinating processing, and communicating the data worthwhile. Otherwise, the program spends more time on how it processes (parallelization) and not enough on what it processes (the task).
The way to get around this is to make sure that pmap has enough work to do at each parallelized step. The easiest way to do that is to partition the input collection into chunks and run pmap on groups of the input, rather than on individual items.
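The chunking idea can be sketched as a small helper that partitions the input and hands whole chunks to pmap. This is a minimal sketch of the technique, not the recipe's final code; the function name chunked-pmap and the chunk size are illustrative assumptions.

```clojure
(defn chunked-pmap
  "Partition coll into chunks of chunk-size, process each chunk in
  parallel with pmap, and flatten the results back into one sequence.
  (chunked-pmap is a hypothetical helper name for illustration.)"
  [f chunk-size coll]
  (->> coll
       (partition-all chunk-size)         ; split the input into chunks
       (pmap (fn [chunk] (mapv f chunk))) ; each thread maps f over a whole chunk
       (apply concat)))                   ; stitch the chunk results together

;; Usage: spread an expensive function across a large input,
;; paying the coordination cost once per chunk instead of once per item.
;; (chunked-pmap expensive-op 512 (range 1000000))
```

Because each pmap step now processes a whole chunk, the per-item coordination cost is amortized across chunk-size items.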
For this recipe, we'll use Monte Carlo methods to approximate pi. We'll compare a serial version, a naïve parallel version, and a version that combines parallelization with partitioning.
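To see what the task itself looks like, a Monte Carlo estimate of pi samples random points in the unit square and counts the fraction that land inside the quarter circle of radius 1; that fraction approximates pi/4. The following is a minimal serial sketch under that standard formulation, not the recipe's final implementation; the name mc-pi is an illustrative assumption.

```clojure
(defn mc-pi
  "Estimate pi by sampling n random points in the unit square and
  counting how many fall inside the quarter circle x^2 + y^2 <= 1."
  [n]
  (let [in-circle (count
                   (filter (fn [_]
                             (let [x (rand), y (rand)]
                               (<= (+ (* x x) (* y y)) 1.0)))
                           (range n)))]
    ;; in-circle / n approximates pi/4, so scale by 4.
    (* 4.0 (/ in-circle n))))

;; Usage: larger n gives a better (but slower) estimate.
;; (mc-pi 1000000)
```

Each sample is cheap on its own, which is exactly why the naïve parallel version struggles: the coordination cost per sample swamps the work, and chunking is what restores the balance.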