Moving data over the network is one of the most expensive parts of processing in Hadoop. In a Hadoop cluster, data is distributed across the nodes of the cluster: HDFS splits each file into blocks so that several nodes can analyze them in parallel, and replicates each block across different machines for resilience against data loss. Each block is then read by the application as a sequence of records, parsed in whatever format the application logic expects.
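To make the splitting and replication concrete, here is a minimal sketch in Python. The function names and the round-robin placement are invented for illustration; HDFS's real placement policy is rack-aware and more sophisticated.

```python
import itertools

def split_into_blocks(file_size, block_size):
    """Split a file's byte range into fixed-size blocks.

    Returns (offset, length) pairs; the last block may be shorter,
    just as the final HDFS block of a file usually is.
    """
    return [(off, min(block_size, file_size - off))
            for off in range(0, file_size, block_size)]

def place_replicas(blocks, nodes, replication=3):
    """Toy round-robin placement of each block onto `replication`
    distinct nodes (NOT HDFS's actual rack-aware policy)."""
    starts = itertools.cycle(range(len(nodes)))
    placement = {}
    for block_id, _ in enumerate(blocks):
        start = next(starts)
        placement[block_id] = [nodes[(start + r) % len(nodes)]
                               for r in range(replication)]
    return placement

# A 350-byte "file" with 128-byte blocks yields blocks of 128, 128, 94 bytes.
blocks = split_into_blocks(file_size=350, block_size=128)
placement = place_replicas(blocks, ["node1", "node2", "node3", "node4"])
```

Each block ends up with three replicas on three distinct machines, so losing any single node leaves every block readable elsewhere.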
When the time comes to process a set of records, the Hadoop framework uses its knowledge of HDFS block placement to schedule each process on a node that holds the data. Wherever possible, a process reads its input from local disk, avoiding unnecessary network transfers and improving overall throughput.
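The scheduling idea can be sketched as follows. This is a simplified illustration, not Hadoop's scheduler: the function `schedule_tasks` and the slot model are assumptions made for the example.

```python
def schedule_tasks(block_locations, free_slots):
    """Assign each block's task to a node holding a local replica when
    that node has a free slot; otherwise fall back to any free node,
    which implies a remote (network) read of the block."""
    assignments = {}
    for block_id, replicas in block_locations.items():
        # Prefer a data-local node: one that stores a replica of this block.
        chosen = next((n for n in replicas if free_slots.get(n, 0) > 0), None)
        if chosen is None:
            # No local capacity: accept any node with a free slot.
            chosen = next((n for n, s in free_slots.items() if s > 0), None)
        if chosen is not None:
            assignments[block_id] = chosen
            free_slots[chosen] -= 1
    return assignments

block_locations = {0: ["node1", "node2"],
                   1: ["node2", "node3"],
                   2: ["node1", "node3"]}
free_slots = {"node1": 1, "node2": 1, "node3": 0, "node4": 1}
print(schedule_tasks(block_locations, free_slots))
# Blocks 0 and 1 run data-local; block 2 falls back to node4 (remote read).
```

Here two of the three tasks read entirely from local disk; only block 2, whose replica holders are saturated, pays the network cost.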
This strategy is cost-effective because moving computation to the data performs far better than moving data to the computation. In Hadoop, this technique is known as data locality.