There can be multiple situations where you want to decommission one or more DataNodes from an HDFS cluster. This recipe shows how to gracefully decommission DataNodes without incurring data loss.
The following steps show you how to decommission DataNodes gracefully:
If your cluster doesn't have one, add an exclude file to the cluster. Create an empty file on the NameNode and point to it from the $HADOOP_HOME/etc/hadoop/hdfs-site.xml file by adding the following property. Restart the NameNode:

<property>
  <name>dfs.hosts.exclude</name>
  <value>FULL_PATH_TO_THE_EXCLUDE_FILE</value>
  <description>Names a file that contains a list of hosts that are
  not permitted to connect to the namenode. The full pathname of the
  file must be specified. If the value is empty, no hosts are
  excluded.</description>
</property>
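As a sketch, assuming the exclude file is placed at /etc/hadoop/conf/dfs.exclude (the path is an example; any location works as long as it matches the value of dfs.hosts.exclude), creating the file and restarting the NameNode might look like this:

```shell
# Create an empty exclude file on the NameNode host
# (example path; use whatever you set in dfs.hosts.exclude)
touch /etc/hadoop/conf/dfs.exclude

# Restart the NameNode so it picks up the new property
# (older Hadoop releases use hadoop-daemon.sh stop/start namenode
#  instead of "hdfs --daemon")
$HADOOP_HOME/bin/hdfs --daemon stop namenode
$HADOOP_HOME/bin/hdfs --daemon start namenode
```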
Add the hostnames of the nodes that are to be decommissioned to the exclude file. Run the following command...
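The remaining steps can be sketched as follows. The hostnames are placeholders, and `hdfs dfsadmin -refreshNodes` is the standard command for making the NameNode re-read the include/exclude files:

```shell
# Add the hostnames of the DataNodes to decommission
# (example hostnames; use your actual DataNode hostnames)
echo "datanode01.example.com" >> /etc/hadoop/conf/dfs.exclude
echo "datanode02.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read the exclude file; the listed
# nodes enter the "Decommission in progress" state while their
# block replicas are copied to other DataNodes
$HADOOP_HOME/bin/hdfs dfsadmin -refreshNodes

# Monitor progress; wait until each node reports
# "Decommissioned" before shutting it down
$HADOOP_HOME/bin/hdfs dfsadmin -report
```

Because the NameNode re-replicates every block hosted on the decommissioning nodes before marking them Decommissioned, shutting them down only after that state is reached is what guarantees no data loss.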