Hadoop 2.x Administration Cookbook

By: Aman Singh

Overview of this book

Hadoop enables the distributed storage and processing of large datasets across clusters of computers. Learning how to administer Hadoop is crucial to exploiting its unique features. With this book, you will be able to overcome common problems encountered in Hadoop administration. The book begins by laying the foundation: it shows you the steps needed to set up a Hadoop cluster and its various nodes. You will then learn how to maintain a Hadoop cluster, particularly at the HDFS layer and with YARN and MapReduce. Further on, you will explore the durability and high availability of a Hadoop cluster. You'll learn about the schedulers in Hadoop and how to configure and use them for your tasks, and you will gain hands-on experience with backup and recovery options and the performance tuning aspects of Hadoop. Finally, you will cover troubleshooting, diagnostics, and best practices in Hadoop administration. By the end of this book, you will have a solid understanding of working with Hadoop clusters and will also be able to secure and encrypt them and configure auditing for your Hadoop clusters.
Table of Contents (20 chapters)
Hadoop 2.x Administration Cookbook
Credits
About the Author
About the Reviewers
www.PacktPub.com
Customer Feedback
Preface
Index

Installation methods


Hadoop can be installed in multiple ways: either with repository methods such as Yum/apt-get, or by extracting the tarball packages. The Apache Bigtop project (http://bigtop.apache.org/) provides packages for the Hadoop infrastructure, and it can be used by creating a local repository of the packages.

All the steps are to be performed as the root user. It is assumed that the reader knows Linux basics and how to set up a Yum repository.

Getting ready

You will need a Linux machine. You can either use the one from the previous task or set up a new node, which will act as the repository server and host all the packages we need.

How to do it...

  1. Connect to a Linux machine that has at least 5 GB disk space to store the packages.

  2. If you are on CentOS or a similar distribution, make sure you have the package yum-utils installed. This package will provide the command reposync.

  3. Create a file named bigtop.repo under /etc/yum.repos.d/. The file name can be anything; only the extension must be .repo.

  4. Populate the file with the Bigtop repository definition: a [bigtop] section with name, baseurl, gpgcheck, and gpgkey entries.

  5. Execute the command reposync -r bigtop. It will create a directory named bigtop under the present working directory, with all the packages downloaded into it.

  6. All the required Hadoop packages can then be installed by serving the downloaded bigtop directory from a repository server and pointing each node's Yum configuration at it.
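A repository file along the lines of the following should work for steps 3 to 5. This is a sketch only: the baseurl, the Bigtop release number, and the gpgkey location are assumptions that should be checked against the Bigtop site for your distribution and architecture (the file is written to /tmp here so the sketch can be run as a non-root user; on a real node it belongs under /etc/yum.repos.d/).

```shell
# Hypothetical contents of /etc/yum.repos.d/bigtop.repo.
# baseurl, release number, and gpgkey URL are assumptions; pick the
# values matching your distribution from http://bigtop.apache.org/.
cat > /tmp/bigtop.repo <<'EOF'
[bigtop]
name=Apache Bigtop
baseurl=http://repos.bigtop.apache.org/releases/1.1.0/centos/7/x86_64
gpgcheck=1
gpgkey=https://dist.apache.org/repos/dist/release/bigtop/KEYS
enabled=1
EOF
```

With the file in place under /etc/yum.repos.d/, `yum repolist` should list the bigtop repository, and `reposync -r bigtop` will mirror its packages locally.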

How it works...

By following steps 2 to 6, the user will be able to configure and use the Hadoop package repository. Setting up a Yum repository is not required, but it makes things much easier when installing on hundreds of nodes. In larger setups, configuration management systems such as Puppet or Chef are typically used to push configuration and packages to the nodes.
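Publishing the synced directory as a repository for other nodes can be sketched as follows. The paths, the example hostname, and the choice of web server are assumptions; the only requirements are that the directory contains Yum metadata (generated with createrepo) and is reachable over HTTP from the client nodes.

```shell
# Sketch: serve the synced bigtop directory as a local Yum repository.
mkdir -p /tmp/repo-demo/bigtop

# Generate the repodata/ metadata (needs the createrepo package):
if command -v createrepo >/dev/null; then
    createrepo /tmp/repo-demo/bigtop
fi

# Serve the directory over HTTP with any web server, for example:
#   python3 -m http.server 80 --directory /tmp/repo-demo

# On every client node, drop in a repo file pointing at the server
# (repo-server.example.com is a placeholder hostname):
cat > /tmp/local-bigtop.repo <<'EOF'
[local-bigtop]
name=Local Bigtop mirror
baseurl=http://repo-server.example.com/bigtop
gpgcheck=0
enabled=1
EOF
```

Once the client-side file is copied to /etc/yum.repos.d/, a plain `yum install hadoop` on any node pulls packages from the local mirror instead of the Internet.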

In this chapter, we will be using the tarball package that was built in the first section to perform installations. This is the best way of learning about the directory structure and the configurations needed.
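A tarball-based install can be sketched as follows. The version number and the /tmp prefix are illustrative (a real install would use the tarball built in the first section, typically extracted under /opt); a placeholder tarball is created here so the steps can be run anywhere.

```shell
# Build a stand-in tarball so the sketch is self-contained; on a real
# node this file is the Hadoop distribution built earlier.
cd /tmp
mkdir -p hadoop-2.7.3/bin
tar -czf hadoop-2.7.3.tar.gz hadoop-2.7.3

# Extract the distribution and create a version-independent symlink,
# so configs can refer to /tmp/opt/hadoop across upgrades.
mkdir -p /tmp/opt
tar -xzf hadoop-2.7.3.tar.gz -C /tmp/opt
ln -sfn /tmp/opt/hadoop-2.7.3 /tmp/opt/hadoop

# Point the environment at the install.
export HADOOP_HOME=/tmp/opt/hadoop
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
```

The symlink-plus-export pattern keeps configuration files and shell profiles stable when a new Hadoop version is dropped in alongside the old one.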