HBase Administration Cookbook

By: Yifeng Jiang

Overview of this book

As an open source distributed big data store, HBase scales to billions of rows with millions of columns, and sits on top of clusters of commodity machines. If you are looking for a way to store and access a huge amount of data in real time, then look no further than HBase.

HBase Administration Cookbook provides practical examples and simple step-by-step instructions for you to administer HBase with ease. The recipes cover a wide range of processes for managing a fully distributed, highly available HBase cluster on the cloud. Working with such a huge amount of data means that an organized and manageable process is key, and this book will help you to achieve that.

The recipes in this practical cookbook start with setting up a fully distributed HBase cluster and moving data into it. You will learn how to use all of the tools for day-to-day administration tasks, as well as how to efficiently manage and monitor the cluster to achieve the best possible performance. Understanding the relationship between Hadoop and HBase will allow you to get the best out of HBase, so the book also shows you how to set up Hadoop clusters, configure Hadoop to cooperate with HBase, and tune its performance.

Restoring HBase data by importing dump files from HDFS


The HBase Import utility loads data that has been exported by the Export utility into an existing HBase table. It is the restore counterpart of the Export-based backup solution.

We will look at the usage of the Import utility in this recipe.
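Before we start, it is worth knowing that the Import utility is an ordinary MapReduce driver class. The following is a minimal sketch, assuming the standard org.apache.hadoop.hbase.mapreduce.Import class bundled with HBase and that the hbase launcher script is on your PATH:

    # Print the Import utility's usage message (no job is launched),
    # which also confirms the class is reachable from your client:
    $ hbase org.apache.hadoop.hbase.mapreduce.Import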

Getting ready

First, start your HDFS and HBase cluster.

We will import the files that we exported in the previous recipe into our hly_temp table. If you do not have those dump files, refer to the Exporting HBase table to dump files on HDFS recipe to generate them in advance. We assume the dump files are saved in the /backup/hly_temp directory.
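If you want to double-check that the dump files are in place before running the import, a simple HDFS listing is enough; the path below is the one assumed in this recipe:

    # List the sequence files produced by the Export utility
    $ hadoop fs -ls /backup/hly_temp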

The Import utility uses MapReduce to import data. Add the HBase configuration file (hbase-site.xml) and the dependency JAR files to the Hadoop classpath on your client node.
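One way to do this, sketched below under the assumption that the hbase script is on the PATH and that your HBase release supports the classpath subcommand, is to let HBase compute the classpath for you; otherwise, append $HBASE_HOME/conf and the HBase and ZooKeeper JARs to HADOOP_CLASSPATH in hadoop-env.sh yourself:

    # Have HBase print its full classpath (conf directory plus JARs)
    # and hand it to Hadoop for the MapReduce job:
    $ export HADOOP_CLASSPATH=$(hbase classpath)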

How to do it...

To import dump files into the hly_temp table:

  1. Connect to your HBase cluster via HBase Shell and create the target table if it does not exist (a possible create command is sketched after this step):

    hbase>...
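
     The create command above is truncated in this excerpt. A possible version, assuming a single column family named 'n' (use whatever column families your exported table actually had), would be:

    hbase> create 'hly_temp', {NAME => 'n'}

     With the table in place, one way of launching the import itself from the client node prepared above is to run the Import MapReduce job against the backup directory:

    $ hbase org.apache.hadoop.hbase.mapreduce.Import hly_temp /backup/hly_temp

     The job reads the sequence files written by the Export utility and puts the rows back into hly_temp; its progress can be followed from the MapReduce web UI.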