HBase High Performance Cookbook

By: Ruchir Choudhry
Overview of this book

Apache HBase is a non-relational NoSQL database management system that runs on top of HDFS. It is an open source, distributed, versioned, column-oriented store written in Java to provide random, real-time access to Big Data. We’ll start off by ensuring you have a solid understanding of the basics of HBase, followed by a thorough explanation of architecting an HBase cluster as per our project specifications. Next, we will explore the scalable structure of tables and learn how to communicate with the HBase client. After this, we’ll show you the intricacies of MapReduce and the art of performance tuning with HBase. Following this, we’ll explain the concepts pertaining to scaling with HBase. Finally, you will get an understanding of how to integrate HBase with other tools such as ElasticSearch. By the end of this book, you will have learned enough to exploit HBase to boost system performance.
Table of Contents (19 chapters)
HBase High Performance Cookbook
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface
7. Large-Scale MapReduce
Index

Working with HDFS


To get the best performance from HBase, it is essential to get optimal performance from Hadoop/HDFS.

There are many parameters we could look at, but we will limit ourselves to the ones that deliver most of the benefit.

Multiple disk mount points:

dfs.datanode.data.dir -> use all disks attached to the DataNode.
DFS block size (dfs.block.size) = 128 MB
Local file system buffer: io.file.buffer.size = 131072 (128 KB)
io.sort.factor = 50 to 100
DataNode and NameNode concurrency:
dfs.namenode.handler.count (131072)
dfs.datanode.max.transfer.threads = 4096

Give HDFS as many paths as possible to spread the disk I/O around and to increase the capacity of HDFS.
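A minimal sketch of how these settings might appear in the Hadoop configuration files is shown below; this is only an illustration, not taken from the book. The dfs.* properties live in hdfs-site.xml and io.file.buffer.size in core-site.xml; the /data/1 to /data/3 mount points are placeholders for whatever disks are attached to your DataNodes, and every value should be tuned for your own cluster.

<!-- hdfs-site.xml: illustrative values only -->
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- one directory per attached disk; /data/1 to /data/3 are placeholder mount points -->
    <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value> <!-- 128 MB -->
  </property>
  <property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>4096</value>
  </property>
</configuration>

<!-- core-site.xml -->
<configuration>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value> <!-- 128 KB -->
  </property>
</configuration>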

How to do it…

Open hdfs-site.xml and look at dfs.block.size, then open mapred-site.xml and look at mapred.min.split.size and mapred.max.split.size.

The input split size can be changed relative to the total input data size and mapped to the block size; Hadoop computes the split size as max(mapred.min.split.size, min(mapred.max.split.size, dfs.block.size)).
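As a rough illustration (the 256 MB figure is an example value, not from the recipe), raising mapred.min.split.size above the block size makes each split cover more than one block, which is one way to cut down the number of map tasks:

<!-- mapred-site.xml: illustrative value only -->
<configuration>
  <property>
    <name>mapred.min.split.size</name>
    <!-- 256 MB = two 128 MB blocks per split, roughly halving the number of map tasks -->
    <value>268435456</value>
  </property>
</configuration>

The trade-off is data locality: a split that spans several blocks may force a map task to read part of its data from remote DataNodes.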

This also helps us reduce the number of map tasks. With fewer map tasks, performance generally improves. This...