In this recipe, we will describe how to troubleshoot the error shown in the following DataNode logs:
2012-02-18 17:43:18,009 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.166.111.191:50010, storageID=DS-2072496811-10.168.130.82-50010-1321345166369, infoPort=50075, ipcPort=50020):DataXceiver
java.io.FileNotFoundException: /usr/local/hadoop/var/dfs/data/current/subdir6/blk_-8839555124496884481 (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockInputStream(FSDataset.java:1068)
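Before changing anything, it can help to confirm that the DataNode really is close to its file descriptor limit. The following is a minimal diagnostic sketch; the DN_PID variable is introduced here purely for illustration and assumes the JDK's jps tool is on the PATH of the node you are inspecting:

# Look up the DataNode's process ID (jps prints "PID ClassName" per line).
DN_PID=$(jps | awk '/DataNode/ {print $1}')
# Show the soft and hard limits on open files for that process.
grep 'open files' /proc/$DN_PID/limits
# Count the file descriptors the process currently holds.
ls /proc/$DN_PID/fd | wc -l

If the count reported by the last command is near the limit reported by the second, the error shown above is expected.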
The Too many open files message indicates that the DataNode process has exhausted its file descriptors, because the operating system's per-user limit on open files is too low for the workload. To fix this issue, you will need root privileges on every node of the cluster. We assume that you use the hadoop user to start your HDFS cluster.
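A typical remedy is to raise the hadoop user's open-file limit on every node. The sketch below edits /etc/security/limits.conf as root; the value 65536 is an illustrative choice, and you should tune it to your workload:

# Append to /etc/security/limits.conf (as root, on every node of the cluster).
# Format: domain  type  item  value
hadoop  soft  nofile  65536
hadoop  hard  nofile  65536

The new limit takes effect the next time the hadoop user logs in; verify it with ulimit -n before restarting the HDFS daemons.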