The MapReduce framework was originally developed at Google, but it has since been adopted as the de facto standard for large-scale data analysis.
In the MapReduce programming model, the basic unit of information is a key-value pair. A MapReduce program reads sets of such key-value pairs as input and outputs new key-value pairs. The overall operation occurs in three stages: Map, Shuffle, and Reduce. All the stages of MapReduce are stateless, enabling them to run independently in a distributed environment. The mapper acts upon one pair at a time, whereas shuffle and reduce act on multiple pairs. All map tasks must finish before the reduce phase can begin; in some cases, such as a map-only job with no reducers, the shuffle stage is skipped entirely. Overall, a program written in MapReduce can undergo many rounds of MapReduce stages, one after another. Please take a look at an example of MapReduce in Appendix C.
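To make the model concrete, here is a minimal sketch of the classic word count job written against Hadoop's Java MapReduce API (org.apache.hadoop.mapreduce); the class names WordCountMapper and WordCountReducer are illustrative. The mapper turns each input line into (word, 1) pairs, the shuffle stage groups the pairs by word, and the reducer sums the counts for each word:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: consumes one (offset, line) pair at a time and emits (word, 1) pairs.
public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        for (String token : line.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);   // emit (word, 1)
            }
        }
    }
}

// Reducer: receives (word, [1, 1, ...]) after the shuffle and emits (word, total).
class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum));  // emit (word, total)
    }
}

Because the mapper and reducer see only the pairs handed to them and hold no shared state, the framework is free to run many copies of each in parallel across the cluster.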
The Hadoop-based MapReduce framework architecture is shown in the following diagram. It is a master-slave architecture consisting of two major components: JobTracker and TaskTracker.
JobTracker is responsible for monitoring and coordinating the execution of jobs across different TaskTrackers on Hadoop nodes. Each Hadoop program is submitted to the JobTracker, which then asks the NameNode for the location of the data referenced by the program. Once the NameNode returns the locations of the DataNodes, the JobTracker assigns the execution of tasks to the TaskTrackers running on the same machines where the data resides, and the work is transferred to those TaskTrackers for execution. The JobTracker keeps track of job progress through a heartbeat mechanism, similar to the one we have seen in HDFS, and updates the progress status based on the heartbeat signals. If a TaskTracker fails to respond within the stipulated time, the JobTracker reschedules its work on another TaskTracker. If a TaskTracker reports the failure of a task, the JobTracker may assign the task to a different TaskTracker, report the failure back to the client, or even mark the TaskTracker as unreliable.
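The driver sketched below shows how a client typically hands a job over for execution; it assumes the illustrative WordCountMapper and WordCountReducer classes shown earlier, and the input and output paths are taken from the command line. The call to waitForCompletion() submits the job, after which the JobTracker takes over scheduling, data-local task placement, and retries:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Illustrative paths; replace with real HDFS locations.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Submits the job to the cluster and blocks until it finishes.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}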
TaskTrackers are slaves deployed on Hadoop nodes, and they serve requests from the JobTracker. Each TaskTracker has an upper limit on the number of tasks that can execute concurrently on its node; these task units are called slots. Each task runs in its own JVM process, which minimizes the impact of a task failure on the TaskTracker parent process itself. The TaskTracker monitors its running tasks and maintains their status, which is later reported to the JobTracker through the heartbeat mechanism. To help us understand the concept, we have provided a MapReduce example in Appendix A, Use Cases for Big Data Search.
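As a rough sketch, slot limits and per-task JVM options for a TaskTracker are set in its mapred-site.xml configuration file; the property names below are the classic MRv1 ones, and the values are illustrative only:

<!-- mapred-site.xml: illustrative values only -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>   <!-- map slots on this TaskTracker -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>   <!-- reduce slots on this TaskTracker -->
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>   <!-- heap for each task's child JVM -->
  </property>
</configuration>

Because each task runs in a child JVM sized by mapred.child.java.opts, a crashing or memory-hungry task takes down only its own process rather than the TaskTracker itself.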