The redesign idea

Initially, Hadoop was written solely as a MapReduce engine. Since it runs on a cluster, its cluster management components were also tightly coupled with the MapReduce programming paradigm.

The concepts of MapReduce and its programming paradigm were so deeply ingrained in Hadoop that it could not be used for anything else. MapReduce therefore became the base of Hadoop, and as a result, the only workload that could run on Hadoop was a batch-oriented MapReduce job. In Hadoop 1.x, a single JobTracker service was overloaded with many responsibilities: managing cluster resources, scheduling jobs, restarting failed tasks, monitoring TaskTrackers, and so on.

There was definitely a need to separate the MapReduce part (a specific programming model) from the resource management infrastructure in Hadoop. YARN was the first attempt to perform this separation.

Limitations of the classical MapReduce or Hadoop 1.x

The main limitations of Hadoop 1.x can be categorized into the following areas:

  • Limited scalability:
    • Large Hadoop clusters reported serious scalability limitations. These were caused mainly by the single JobTracker service: on failures, attempts to re-replicate data overloaded the live nodes and flooded the network, seriously degrading overall cluster performance.
    • According to Yahoo!, the practical limits of such a design are reached at about 5,000 nodes and 40,000 concurrently running tasks. With this design, one is therefore forced to create smaller, and hence less powerful, clusters.
  • Low cluster resource utilization:
    • In Hadoop 1.x, the resources on each slave node (DataNode) are divided into a fixed number of map slots and reduce slots.
    • Consider the scenario where a MapReduce job has already taken up all the available map slots and now wants to run new map tasks. It cannot, even though all the reduce slots are still empty. This notion of a fixed number of slots has a serious drawback and results in poor cluster utilization (a configuration sketch of these fixed slots follows this list).
  • Lack of support for alternative frameworks/paradigms:
    • The main focus of Hadoop right from the beginning was to perform computation on large datasets using parallel processing.
    • Therefore, the only programming model it supported was MapReduce.
    • With current industry needs in terms of new big data use cases, many new and alternative programming models (such as Apache Giraph, Apache Spark, Storm, Tez, and so on) are coming into the picture every day. There is an increasing demand to support multiple programming paradigms besides MapReduce, to cover the varied use cases that the big data world faces.
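
To make the fixed-slot model concrete, the following is a minimal sketch of how slots were typically declared in a Hadoop 1.x mapred-site.xml; the property names are the standard ones, but the values are illustrative only:

    <!-- mapred-site.xml (Hadoop 1.x): slots are fixed per TaskTracker -->
    <configuration>
      <property>
        <!-- At most 4 map tasks may run on this node, even if it is otherwise idle -->
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>4</value>
      </property>
      <property>
        <!-- At most 2 reduce tasks, even while map slots sit full and these sit empty -->
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>2</value>
      </property>
    </configuration>

Because these limits are fixed when the TaskTracker starts, a map-heavy job cannot borrow idle reduce slots (or vice versa), which is exactly the utilization problem described above.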

YARN as the modern operating system of Hadoop

The MapReduce programming model is, no doubt, great for many applications, but not for everything in the world of computation. Some use cases are a natural fit for MapReduce; many others are not.

MapReduce is essentially batch-oriented, whereas support for real-time and near real-time processing is an emerging requirement in the field of big data.

YARN took the cluster resource management capabilities out of the MapReduce system so that new engines could use them as generic services. This lightened the MapReduce system, letting it focus on the data processing part, which it is good at and will ideally continue to be.

YARN therefore turns into a data operating system for Hadoop 2.0, as it enables multiple applications to coexist in the same shared cluster. Refer to the following figure:

[Figure: YARN as a modern OS for Hadoop]
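
In practice, "coexisting on the same cluster" means that every framework negotiates resources through the same client API. The following is a minimal, illustrative Java sketch against the Hadoop 2.x YarnClient API; the application name, queue, and launch command are placeholders, and a real framework would launch its own ApplicationMaster here:

    import java.util.Collections;
    import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class SubmitToYarn {
      public static void main(String[] args) throws Exception {
        // Any engine (MapReduce, Spark, Giraph, ...) talks to the ResourceManager this way
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext context = app.getApplicationSubmissionContext();
        context.setApplicationName("demo-app");             // placeholder name
        context.setQueue("default");                        // scheduler queue
        context.setResource(Resource.newInstance(1024, 1)); // 1 GB, 1 vcore for the AM

        // Placeholder launch context; a real ApplicationMaster command goes here
        ContainerLaunchContext amSpec = ContainerLaunchContext.newInstance(
            Collections.emptyMap(),                 // local resources
            Collections.emptyMap(),                 // environment
            Collections.singletonList("/bin/true"), // AM launch command (placeholder)
            null, null, null);
        context.setAMContainerSpec(amSpec);

        System.out.println("Submitted " + yarnClient.submitApplication(context));
        yarnClient.stop();
      }
    }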

What are the design goals for YARN?

This section talks about the core design goals of YARN:

  • Scalability:
    • Scalability is a key requirement for big data. Hadoop was primarily meant to run on clusters of thousands of commodity nodes, and the cost of such hardware keeps decreasing year on year.
    • YARN is therefore designed to perform efficiently on networks of many thousands of nodes.
  • High cluster utilization:
    • In Hadoop 1.x, the cluster resources were divided in terms of fixed size slots for both map and reduce tasks. This means that there could be a scenario where map slots might be full while reduce slots are empty, or vice versa. This was definitely not an optimal utilization of resources, and it needed further optimization.
    • YARN allocates resources in fine-grained units of RAM, CPU, and disk, called containers, leading to an optimal utilization of the available resources (see the yarn-site.xml sketch after this list).
  • Locality awareness:
    • This is a key requirement for YARN when dealing with big data: moving computation to the data is cheaper than moving the data to the computation.
    • This helps to minimize network congestion and increase the overall throughput of the system.
  • Multitenancy:
    • As the core development of Hadoop at Yahoo! was driven primarily by large-scale computation, HDFS also acquired a permission model, quotas, and other features that improve its multitenant operation.
    • YARN was therefore designed to support multitenancy in its core architecture. Since cluster resource allocation/management is at the heart of YARN, sharing processing and storage capacity across clusters was central to the design.
    • YARN has the notion of pluggable schedulers, and the Capacity Scheduler that ships with YARN has been enhanced to provide a flexible resource model, elastic computing, application limits, and other features that enable multiple tenants to securely share the cluster in an optimized way (a queue configuration sketch follows this list).
  • Support for programming model diversity:
    • The MapReduce programming model is no doubt great for many applications, but not for everything in the world of computation.
    • As the world of big data is still in its inception phase, organizations are heavily investing in R&D to develop new and evolving frameworks to solve a variety of problems that big data brings.
  • A flexible resource model:
    • Besides being a poor match for the requirements of emerging frameworks, the fixed number of resource slots had serious utilization problems. It was therefore natural for YARN to come up with a flexible and generic resource management model.
  • A secure and auditable operation:
    • As Hadoop continued to grow to manage more tenants with a myriad of use cases across different industries, the requirements for isolation became more demanding.
    • Also, the authorization model lacked strong and scalable authentication; Hadoop was designed with parallel processing in mind, not comprehensive security, which came as an afterthought.
    • YARN understands this and adds security-related requirements into its design.
  • Reliability/availability:
    • Although fault tolerance is part of the core design, in reality maintaining a large Hadoop cluster is a tedious task.
    • All issues related to high availability, failures, failures on restart, and reliability were therefore a core requirement for YARN.
  • Backward compatibility:
    • Hadoop 1.x has been in the picture for a while, with many successful production deployments across many industries. This massive installed base of MapReduce applications, and the ecosystem of related projects such as Hive and Pig, would not tolerate a radical redesign. Therefore, the new architecture reused as much code from the existing framework as possible and underwent no major surgery, which is how MRv2 ensures satisfactory compatibility with MRv1 applications.
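
As a concrete illustration of the flexible resource model mentioned above, here is a minimal yarn-site.xml sketch; the property names are the standard Hadoop 2.x ones, while the values are examples rather than recommendations:

    <!-- yarn-site.xml: per-node resources and container sizing limits -->
    <configuration>
      <property>
        <!-- Total memory on this NodeManager available for containers -->
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>8192</value>
      </property>
      <property>
        <!-- Total virtual cores available for containers -->
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>8</value>
      </property>
      <property>
        <!-- Smallest container the scheduler will allocate -->
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
      </property>
      <property>
        <!-- Largest container a single request may ask for -->
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
      </property>
    </configuration>

Any application, from any framework, can request containers sized anywhere between the configured minimum and maximum, instead of being confined to fixed map or reduce slots.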
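Likewise, multitenant sharing is commonly expressed through Capacity Scheduler queues. The sketch below assumes two hypothetical tenants, analytics and adhoc; the queue names, capacities, and ACL values are invented for illustration:

    <!-- capacity-scheduler.xml: share one cluster between two tenants -->
    <configuration>
      <property>
        <name>yarn.scheduler.capacity.root.queues</name>
        <value>analytics,adhoc</value>
      </property>
      <property>
        <!-- Guaranteed share for the analytics tenant -->
        <name>yarn.scheduler.capacity.root.analytics.capacity</name>
        <value>70</value>
      </property>
      <property>
        <name>yarn.scheduler.capacity.root.adhoc.capacity</name>
        <value>30</value>
      </property>
      <property>
        <!-- Elasticity: analytics may borrow idle capacity up to 90% -->
        <name>yarn.scheduler.capacity.root.analytics.maximum-capacity</name>
        <value>90</value>
      </property>
      <property>
        <!-- Only these users/groups may submit to the analytics queue (illustrative) -->
        <name>yarn.scheduler.capacity.root.analytics.acl_submit_applications</name>
        <value>analytics-team</value>
      </property>
    </configuration>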