Hadoop 2.x Administration Cookbook

By: Aman Singh

Overview of this book

Hadoop enables the distributed storage and processing of large datasets across clusters of computers. Learning how to administer Hadoop is crucial to exploiting its unique features. With this book, you will be able to overcome common problems encountered in Hadoop administration. The book begins by laying the foundation, showing you the steps needed to set up a Hadoop cluster and its various nodes. You will gain a better understanding of how to maintain a Hadoop cluster, especially on the HDFS layer and when using YARN and MapReduce. Further on, you will explore the durability and high availability of a Hadoop cluster. You’ll get a better understanding of the schedulers in Hadoop and how to configure and use them for your tasks. You will also get hands-on experience with the backup and recovery options and the performance tuning aspects of Hadoop. Finally, you will get a better understanding of troubleshooting, diagnostics, and best practices in Hadoop administration. By the end of this book, you will have a proper understanding of working with Hadoop clusters and will also be able to secure and encrypt them and configure auditing for your Hadoop clusters.

Configuring HDFS block size

Getting ready

To step through the recipes in this chapter, make sure you have completed the recipes in Chapter 1, Hadoop Architecture and Deployment, or at least understand the basic Hadoop cluster setup.

How to do it...

  1. ssh to the master node, which is the Namenode, and navigate to the directory where Hadoop is installed. In the previous chapter, Hadoop was installed at /opt/cluster/hadoop:
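     For example, assuming the Namenode is reachable by the hostname master1 (substitute your own hostname):

    $ ssh master1
    $ cd /opt/cluster/hadoop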

  2. Change to the Hadoop user, or any other user that is running Hadoop, by using the following:

    $ sudo su - hadoop
    
  3. Edit the hdfs-site.xml file and modify the dfs.blocksize parameter to reflect the changes, as shown in the following example:
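     A representative entry, assuming a target block size of 128 MB (134217728 bytes); the property goes inside the existing <configuration> element of hdfs-site.xml:

    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>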

  4. dfs.blocksize is the parameter that determines the HDFS block size. The unit is bytes, and the default value is 64 MB in Hadoop 1 and 128 MB in Hadoop 2. The block size can be configured according to your needs.
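     Note that the value in hdfs-site.xml is only the cluster-wide default; a client can also override it per file at write time using the generic -D option. A minimal sketch, using a hypothetical local file data.txt and target directory /tmp, that writes with a 64 MB (67108864 bytes) block size:

    $ hdfs dfs -D dfs.blocksize=67108864 -put data.txt /tmp/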

  5. Once the changes are made to hdfs-site.xml, copy the file across all nodes in the cluster.
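     One way to do this, assuming the Datanodes are reachable as dn1, dn2, and dn3 (substitute your own hostnames), passwordless SSH is set up for the hadoop user, and the configuration lives in the default etc/hadoop directory under the install path:

    $ cd /opt/cluster/hadoop
    $ for node in dn1 dn2 dn3; do scp etc/hadoop/hdfs-site.xml $node:/opt/cluster/hadoop/etc/hadoop/; done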

  6. Then restart the Namenode and Datanode daemons for the new block size to take effect.
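     A minimal sketch of the restart, assuming the standard sbin scripts shipped with Hadoop 2 and that a brief HDFS outage is acceptable:

    $ /opt/cluster/hadoop/sbin/stop-dfs.sh
    $ /opt/cluster/hadoop/sbin/start-dfs.sh

     Keep in mind that the new dfs.blocksize applies only to files written after the change; files already stored in HDFS keep the block size they were created with.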