HBase integrates closely with Hadoop's MapReduce because it is built on top of the Apache Hadoop framework. MapReduce provides distributed computation for high-throughput data processing, while the Hadoop Distributed File System (HDFS) gives HBase a storage layer with high availability, reliability, and durability.
Before we go into more details of how HBase integrates with Hadoop's MapReduce framework, let's first understand how this framework actually works.
Processing terabytes or petabytes of data calls for a system whose performance scales linearly with the number of physical machines added. Apache Hadoop's MapReduce framework is designed to provide exactly this kind of linearly scalable processing power for huge amounts of Big Data.
Let's discuss how MapReduce processes the data shown in the preceding diagram. In MapReduce, the first step is the split process, which divides the input data into reasonably sized chunks...
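To make the split, map, shuffle, and reduce phases concrete, here is a minimal, single-machine sketch of the classic word-count flow in plain Java. This is an illustrative toy, not Hadoop's actual API: the class name `MiniMapReduce`, its methods, and the chunking strategy are assumptions made for this example (real Hadoop splits input along HDFS block boundaries and runs mappers and reducers on separate nodes).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MiniMapReduce {

    // Split: divide the input into fixed-size chunks. Hadoop derives splits
    // from HDFS block boundaries; here we simply slice a list of lines.
    static List<List<String>> split(List<String> lines, int chunkSize) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += chunkSize) {
            chunks.add(lines.subList(i, Math.min(i + chunkSize, lines.size())));
        }
        return chunks;
    }

    // Map + shuffle: each chunk is processed independently, emitting
    // (word, 1) pairs, which are then grouped by key.
    static Map<String, List<Integer>> mapPhase(List<List<String>> chunks) {
        Map<String, List<Integer>> shuffled = new HashMap<>();
        for (List<String> chunk : chunks) {
            for (String line : chunk) {
                for (String word : line.split("\\s+")) {
                    shuffled.computeIfAbsent(word, k -> new ArrayList<>()).add(1);
                }
            }
        }
        return shuffled;
    }

    // Reduce: aggregate the grouped values for each key into a final count.
    static Map<String, Integer> reducePhase(Map<String, List<Integer>> shuffled) {
        Map<String, Integer> counts = new HashMap<>();
        shuffled.forEach((word, ones) ->
                counts.put(word, ones.stream().mapToInt(Integer::intValue).sum()));
        return counts;
    }

    static Map<String, Integer> wordCount(List<String> lines, int chunkSize) {
        return reducePhase(mapPhase(split(lines, chunkSize)));
    }

    public static void main(String[] args) {
        List<String> input = List.of("hbase on hadoop", "hadoop mapreduce", "hbase hbase");
        System.out.println(wordCount(input, 2));
    }
}
```

Because each chunk is mapped independently, the map phase parallelizes naturally across machines; in real Hadoop, the framework schedules those map tasks close to the HDFS blocks holding the data.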