HBase High Performance Cookbook

By: Ruchir Choudhry

Overview of this book

Apache HBase is a non-relational NoSQL database management system that runs on top of HDFS. It is an open source, distributed, versioned, column-oriented store written in Java that provides random, real-time access to big data. We’ll start off by ensuring you have a solid understanding of the basics of HBase, followed by a thorough explanation of architecting an HBase cluster as per our project specifications. Next, we will explore the scalable structure of tables and learn how to communicate with the HBase client. After this, we’ll show you the intricacies of MapReduce and the art of performance tuning with HBase. Following this, we’ll explain the concepts pertaining to scaling with HBase. Finally, you will get an understanding of how to integrate HBase with other tools such as ElasticSearch. By the end of this book, you will have learned enough to exploit HBase to boost system performance.

LZ4 compressor


This library provides access to two compression methods, both of which generate a valid LZ4 stream:

  1. Fast scan (LZ4):

    • Low memory footprint (~16 KB)

    • Very fast (fast scan with skipping heuristics in case the input looks incompressible)

    • Reasonable compression ratio (depending on the redundancy of the input)

  2. High compression (LZ4 HC):

    • Medium memory footprint (~256 KB)

    • Rather slow (~10 times slower than LZ4)

    • A good compression ratio (depending on the size and the redundancy of the input)

The streams produced by these two compression algorithms use the same compression format; they are very fast to decompress and can be decompressed by the same decompressor instance.
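
This shared stream format is easy to see in practice. Here is a minimal sketch using the standalone lz4 command-line tool, which is an assumption on our part: the utility ships separately from Hadoop, and input.txt stands in for any sample file. Level 1 selects the fast scan method, levels 3 and above select LZ4 HC, and the same decompressor reads back both streams:

    # Compress the same input with the fast scan method (level 1)
    # and with LZ4 HC (level 9).
    lz4 -1 -f input.txt fast.lz4
    lz4 -9 -f input.txt hc.lz4

    # Both streams use the same format, so the same decompressor
    # reads them back.
    lz4 -d -f fast.lz4 out_fast.txt
    lz4 -d -f hc.lz4 out_hc.txt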

How to do it…

LZ4 comes bundled with Hadoop. Make sure the native shared object (.so) file, libhadoop.so, is present with the proper read and execute permissions when starting HBase. This can also be done after setup by creating a symbolic link from the HBase native library directory to the Hadoop native libraries.

For example, suppose your platform is Linux-amd64...
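
The following is a sketch of both checks, assuming $HADOOP_HOME and $HBASE_HOME point to your installations; the exact native directory layout varies between Hadoop versions, and the Linux-amd64-64 directory name follows the platform example above:

    # Verify that the native library exists and is readable and executable.
    ls -l $HADOOP_HOME/lib/native/libhadoop.so

    # Confirm that Hadoop can load its native codecs, including LZ4
    # (available in Hadoop 2 and later).
    hadoop checknative -a

    # Post-setup alternative: link the Hadoop native libraries into the
    # directory HBase searches on this platform.
    mkdir -p $HBASE_HOME/lib/native
    ln -s $HADOOP_HOME/lib/native $HBASE_HOME/lib/native/Linux-amd64-64

If checknative reports lz4: true, HBase started on the same node will be able to use the LZ4 codec through the linked native library.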