Hadoop MapReduce v2 Cookbook - Second Edition: RAW

Book Image

Hadoop MapReduce v2 Cookbook - Second Edition: RAW

Overview of this book

Table of Contents (19 chapters)
Hadoop MapReduce v2 Cookbook Second Edition
Credits
About the Author
Acknowledgments
About the Author
About the Reviewers
www.PacktPub.com
Preface
Index

Decommissioning DataNodes


There are several situations in which you may want to decommission one or more DataNodes from an HDFS cluster. This recipe shows how to decommission DataNodes gracefully, without incurring data loss.

How to do it...

The following steps show you how to decommission DataNodes gracefully:

  1. If your cluster doesn't already have one, add an exclude file to the cluster: create an empty file on the NameNode host and point to it from the $HADOOP_HOME/etc/hadoop/hdfs-site.xml file by adding the following property, then restart the NameNode:

    <property>
      <name>dfs.hosts.exclude</name>
      <value>FULL_PATH_TO_THE_EXCLUDE_FILE</value>
      <description>Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded.</description>
    </property>
  2. Add the hostnames of the nodes that are to be decommissioned to the exclude file.

  3. Run the following command...
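Steps 1 and 2 above can be sketched as the following shell commands, run on the NameNode host. The exclude-file path and hostname here are examples only; substitute the FULL_PATH_TO_THE_EXCLUDE_FILE value you configured in hdfs-site.xml and the actual hostnames of the nodes you are decommissioning:

```shell
# Example path only -- use the path configured as dfs.hosts.exclude
EXCLUDE_FILE=/tmp/dfs.exclude

# Step 1: create the (initially empty) exclude file on the NameNode host
touch "$EXCLUDE_FILE"

# Step 2: add the DataNodes to decommission, one hostname per line
echo "datanode03.example.com" >> "$EXCLUDE_FILE"
cat "$EXCLUDE_FILE"
```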