Metrics are more relevant to the maintainers of a Hadoop cluster than to its users. Many users run MapReduce jobs on a cluster; they are concerned with MapReduce counters rather than with metrics, which are daemon-specific. MapReduce counters report job-level information: the number of mappers and reducers, the number of bytes read from or written to HDFS and non-HDFS file systems, how many spills occurred, details about the shuffle phase, and so on. For Hadoop administrators, however, metrics about the daemons are of greater concern, as they help in better understanding the state of the cluster.
Each daemon exposes a group of metrics contexts. Some of the contexts available are listed in the following table:
Hadoop 1.x | Hadoop 2.x
---|---
jvm: for Java Virtual Machine | yarn: for the YARN components
dfs: for Distributed File System | jvm: for Java Virtual Machine
mapred: for JobTracker and TaskTracker | dfs: for Distributed File System
rpc: for Remote Procedure... |
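In Hadoop 2.x, these contexts are emitted through the metrics2 framework, which is configured in the `hadoop-metrics2.properties` file. As a minimal, illustrative sketch (the sink name `file` and the output filenames below are arbitrary examples, not defaults), a daemon's metrics can be routed to a local file like this:

```properties
# hadoop-metrics2.properties -- illustrative sketch, not a complete configuration.
# Register a sink named "file" for all daemons, backed by Hadoop's FileSink class.
*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
# Per-daemon output files (the filenames here are arbitrary examples).
namenode.sink.file.filename=namenode-metrics.out
datanode.sink.file.filename=datanode-metrics.out
```

With this in place, each configured daemon periodically writes the metrics from its contexts (jvm, dfs, rpc, and so on) to the named file, which administrators can then inspect or feed into a monitoring system.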