In the previous section, you learned to create a generic function that decides, on the fly, whether to divide and conquer using workers or whether it's more beneficial to simply call the function in the main thread. Now that we have a generic parallelization mechanism in place, what kinds of problems can we solve? In this section, we'll address the most typical concurrency scenarios that benefit from a solid concurrency architecture.
A problem is embarrassingly parallel when it's obvious how the larger task can be broken down into smaller tasks. These smaller tasks don't depend on one another, which makes it easy to start a task that takes input and produces output without relying on the state of other workers. This comes back to functional programming and the ideas of referential transparency and freedom from side effects.
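As a minimal sketch of this decomposition (the `square` task, `chunk` helper, and `parallelMap` names are illustrative, not part of the mechanism built earlier), note that each chunk can be processed with no knowledge of the others, which is exactly what makes the problem embarrassingly parallel:

```javascript
// A pure task: its output depends only on its input, with no shared state.
const square = n => n * n;

// Split the input into independent chunks, one per prospective worker.
function chunk(items, parts) {
  const size = Math.ceil(items.length / parts);
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Each chunk could be handed off to a separate worker; here we map
// sequentially to show that the results merge back together without
// any cross-chunk coordination.
function parallelMap(items, fn, parts = 4) {
  return chunk(items, parts)
    .map(part => part.map(fn)) // stand-in for dispatching to a worker
    .flat();
}

console.log(parallelMap([1, 2, 3, 4, 5], square));
```

Because `square` is referentially transparent, the chunks can run in any order, on any worker, and the merged result is the same.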
These are the types of problems we want to solve with concurrency—at least at first, during the difficult first...