MapReduce is the primary processing model supported in Hadoop 1. It follows a divide-and-conquer approach to data processing, popularized by a 2004 paper from Google (http://research.google.com/archive/mapreduce.html), and has foundations in both functional programming and database research. The name itself refers to the two distinct steps applied to all input data: a map function and a reduce function.
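As a conceptual illustration of these two steps, consider the classic word-count problem sketched in plain Java, with no Hadoop dependencies. The class and method names below are purely illustrative, not part of the Hadoop API: the map function emits a (word, 1) pair for each word, a grouping step stands in for the framework's shuffle, and the reduce function sums the values for each key.

```java
import java.util.*;

// A minimal sketch of the MapReduce model in plain Java.
// All names here are illustrative, not Hadoop API classes.
public class WordCountSketch {

    // Map step: one input line -> a list of (key, value) pairs
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                out.add(Map.entry(word, 1));
            }
        }
        return out;
    }

    // Reduce step: one key plus all its values -> an aggregated result
    static int reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<String> input = List.of("the quick brown fox", "the lazy dog");

        // Stand-in for the shuffle phase: group emitted values by key
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String line : input) {
            for (Map.Entry<String, Integer> pair : map(line)) {
                grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>())
                       .add(pair.getValue());
            }
        }

        // Reduce phase: one call per distinct key
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            System.out.println(e.getKey() + "\t" + reduce(e.getKey(), e.getValue()));
        }
    }
}
```

In a real Hadoop job, the grouping loop above is performed by the framework between the map and reduce stages; the programmer supplies only the two functions.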
Every MapReduce application is a sequence of jobs that build atop this very simple model. Sometimes the overall application may require multiple jobs, where the output of the reduce stage of one job is the input to the map stage of another, and sometimes there might be multiple reduce functions, but the core concepts remain the same.
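To make the chaining idea concrete, here is a hypothetical sketch, again in plain Java with illustrative names only, in which the (word, count) output of a first word-count job is consumed by a second job whose map function inverts each pair to (count, word), so that words can be grouped by frequency:

```java
import java.util.*;

// Hypothetical sketch of chaining two conceptual jobs; these names
// are illustrative and not part of the Hadoop API.
public class JobChainSketch {

    // Second job's map step: consumes the first job's
    // output records, formatted as "word\tcount"
    static Map.Entry<Integer, String> invert(String record) {
        String[] parts = record.split("\t");
        return Map.entry(Integer.parseInt(parts[1]), parts[0]);
    }

    public static void main(String[] args) {
        // Pretend output of job 1 (a word count)
        List<String> job1Output = List.of("dog\t1", "the\t2", "fox\t1");

        // Second job's shuffle: group words by their count
        Map<Integer, List<String>> grouped = new TreeMap<>();
        for (String record : job1Output) {
            Map.Entry<Integer, String> pair = invert(record);
            grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>())
                   .add(pair.getValue());
        }
        System.out.println(grouped);
    }
}
```

The second job is still just a map function and a grouping step; only the data format linking the jobs changes.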
We will introduce the MapReduce model by looking at the nature of the map and reduce functions and then describe the Java API required to build implementations of them. After showing some examples, we will walk through a MapReduce execution to...