Each parallel algorithm carries its own overhead, particularly the setup cost of apportioning the work among a set of processors and the tear-down cost of compiling the aggregated results from those processors.
To see how we might reduce these overheads, let's first examine the process of result aggregation.
The following figure shows a typical master-worker, task farm-style approach utilizing 15 independent worker nodes. In this case, each separate task undertaken by a worker contributes to an overall result.
Each worker transmits the partial result it generates back to the master, and the master then processes all the partial results to generate the final accumulated result.
Let's also assume that each worker task requires the same amount of computational effort, so every worker finishes its task at approximately the same moment in time.
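This master-worker pattern can be sketched in a few lines of Python. The example below is a minimal illustration, not taken from the text: it assumes the partial result is a simple numeric value (a partial sum), uses a process pool to stand in for the 15 worker nodes, and has the master serially combine the partial results, which is exactly the tear-down overhead discussed above.

```python
from multiprocessing import Pool

def worker_task(chunk):
    # Each worker computes a partial result -- here, a partial sum
    # over its assigned slice of the data.
    return sum(chunk)

def master(data, num_workers=15):
    # Setup overhead: apportion the work into one chunk per worker.
    size = len(data)
    chunks = [data[i * size // num_workers:(i + 1) * size // num_workers]
              for i in range(num_workers)]
    # Workers run in parallel, each producing a partial result.
    with Pool(num_workers) as pool:
        partials = pool.map(worker_task, chunks)
    # Tear-down overhead: the master serially aggregates all partials.
    return sum(partials)

if __name__ == "__main__":
    print(master(list(range(1, 101))))
```

Note that the final aggregation loop runs on the master alone; with 15 workers it receives 15 partial results and must process them one after another, which is the serial bottleneck the rest of this section sets out to reduce.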
It's not difficult to see from the figure...