Monitoring counters and logging additional information is all well and good, but knowing where to actually find the information you need when troubleshooting an application can be intimidating. In this section, we will look at how Hadoop stores logs and system information. We can distinguish three types of logs, as follows:
YARN applications, including MapReduce jobs
Daemon logs (NameNode and ResourceManager)
Services that log non-distributed workloads, for example, HiveServer2 logging to
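As a sketch of how the first type is usually retrieved: once YARN log aggregation is enabled, the `yarn` CLI can pull an application's logs back out of HDFS. The application and container IDs below are placeholders, not real identifiers.

```shell
# List recently finished YARN applications to find the ID you need
yarn application -list -appStates FINISHED

# Fetch the aggregated logs for one application
# (application_1234567890123_0042 is a placeholder ID)
yarn logs -applicationId application_1234567890123_0042

# Narrow the output to a single container if the full dump is too large
yarn logs -applicationId application_1234567890123_0042 \
    -containerId container_1234567890123_0042_01_000001
```

Daemon logs of the second type, by contrast, stay on the local disk of the node running the daemon, typically under `$HADOOP_LOG_DIR` (often `/var/log/hadoop-*` on packaged installations).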
Alongside these log types, Hadoop exposes a number of metrics at the filesystem level (storage availability, replication factor, and number of blocks) and at the system level. As mentioned, both Apache Ambari and Cloudera Manager do a nice job as frontends that centralize access to debug information. Under the hood, however, each service logs either to HDFS or to the local filesystem of a single node. Furthermore, YARN, MapReduce, and HDFS expose their logfiles and metrics via...
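As one concrete example of metrics exposure, each Hadoop daemon's embedded web server serves its JMX metrics as JSON on the `/jmx` endpoint. The hostname below is a placeholder; the default NameNode HTTP port is 9870 on Hadoop 3 (50070 on Hadoop 2).

```shell
# Dump all NameNode metrics as JSON (host and port are placeholders)
curl -s 'http://namenode.example.com:9870/jmx'

# Query a single MBean, e.g. filesystem state
# (capacity, block counts, live DataNodes)
curl -s 'http://namenode.example.com:9870/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState'
```

The same endpoint exists on the ResourceManager and DataNode web UIs, so a monitoring system can scrape every daemon uniformly.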