Basically, a cluster is a set of networked nodes or instances that work together. The building blocks in Hadoop clustering terminology can be categorized as Master and Slave nodes. Each category runs a specific set of daemons to store data and run parallel computation tasks on that data (MapReduce). Cluster nodes can be assigned different roles, which are identified as follows:
NameNode: This is the orchestrator and centerpiece of the Hadoop cluster, which stores the filesystem metadata.
JobTracker: This daemon coordinates the parallel data processing performed with MapReduce, scheduling tasks and dispatching them to the nodes that hold the data.
DataNode: This role is assigned to the majority of the Slave nodes, which are the workhorses of the cluster; they store the actual data blocks. A DataNode daemon is a slave to the NameNode.
TaskTracker: Like the DataNode, this role is assigned to Slave nodes. A TaskTracker daemon is a slave to the JobTracker and executes MapReduce tasks on its node.
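As a minimal sketch of how these roles map onto a Hadoop 1.x configuration (assuming a master host named `master` — a hypothetical hostname for this example), every node points at the NameNode and JobTracker through two standard properties, while the hosts listed in the `conf/slaves` file start the DataNode and TaskTracker daemons:

```xml
<!-- conf/core-site.xml (on every node): address of the NameNode,
     the HDFS master. "master:9000" is an assumed host and port. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```

```xml
<!-- conf/mapred-site.xml (on every node): address of the JobTracker,
     the MapReduce master. "master:9001" is an assumed host and port. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
```

With this layout, running the start scripts on the master launches the NameNode and JobTracker locally and starts DataNode and TaskTracker daemons on each host named in `conf/slaves`.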