Learning YARN
Overview of this book

Today's enterprises generate huge volumes of data. To provide effective services and to make smarter, more intelligent decisions from this data, they turn to big data analytics. In recent years, Hadoop has been used for massive data storage and efficient distributed processing. The Yet Another Resource Negotiator (YARN) framework solves the resource management design problems faced by the Hadoop 1.x framework by providing a more scalable, efficient, flexible, and highly available resource management framework for distributed data processing.

This book starts with an overview of YARN's features and explains how YARN provides a business solution for growing big data needs. You will learn to provision and manage single-node as well as multi-node Hadoop-YARN clusters in the easiest way. You will walk through YARN administration, life cycle management, application execution, REST APIs, schedulers, the security framework, and more. You will gain insights into YARN components and features such as the ResourceManager, NodeManager, ApplicationMaster, Container, Timeline Server, High Availability, and Resource Localization. The book explains Hadoop-YARN commands and the configuration of components, and explores topics such as High Availability, Resource Localization, and log aggregation. You will then be ready to develop your own ApplicationMaster and execute it over a Hadoop-YARN cluster. Towards the end of the book, you will learn about the security architecture and the integration of YARN with big data technologies such as Spark and Storm. This book promises conceptual as well as practical knowledge of resource management using YARN.

The Hadoop-YARN single node installation


In a single node installation, all the Hadoop-YARN daemons (NameNode, ResourceManager, DataNode, and NodeManager) run on a single node as separate Java processes. You will need only one Linux machine with a minimum of 2 GB RAM and 15 GB free disk space.
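Before proceeding, you can confirm that the machine meets these minimums with a short pre-flight check. The following is a sketch (not from the book) that reads total RAM from /proc/meminfo and free space on the root filesystem, then compares them against the 2 GB RAM and 15 GB disk requirements:

```shell
# Pre-flight check: 2 GB RAM and 15 GB free disk (values in KB).
ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
disk_kb=$(df -k / | awk 'NR==2 {print $4}')
echo "RAM: $((ram_kb / 1024)) MB, free disk on /: $((disk_kb / 1024 / 1024)) GB"
[ "$ram_kb" -ge $((2 * 1024 * 1024)) ]  && echo "RAM OK"  || echo "RAM below 2 GB"
[ "$disk_kb" -ge $((15 * 1024 * 1024)) ] && echo "disk OK" || echo "disk below 15 GB"
```

If either check fails, resize the machine before continuing; an undersized node will still run the daemons but will swap or fill its disk quickly under load.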

Prerequisites

Before starting with the installation steps, make sure that the node is prepared as described in the previous section.

  • The hostname used in the single node installation is localhost with 127.0.0.1 as the IP address. It is known as the loopback address for a machine. You need to make sure that the /etc/hosts file contains the resolution for the loopback address. The loopback entry will look like this:

    127.0.0.1    localhost
    
  • Passwordless SSH is configured for localhost. To set this up, execute the following command (it assumes an SSH key pair already exists; generate one with ssh-keygen if it does not):

    ssh-copy-id localhost
    
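Both prerequisites can be verified together with a short shell sketch. This is an illustration, not the book's own script; it assumes an RSA key pair under ~/.ssh (generating one if missing) and that an SSH daemon is running on the machine:

```shell
# Check that /etc/hosts resolves the loopback address:
grep -qE '^127\.0\.0\.1[[:space:]]+localhost' /etc/hosts \
  && echo "loopback entry present" \
  || echo "add '127.0.0.1 localhost' to /etc/hosts"

# Create a key pair if one does not already exist, then copy it:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id localhost 2>/dev/null || echo "ssh-copy-id failed (is sshd running?)"

# With keys in place, this should succeed without a password prompt:
ssh -o BatchMode=yes localhost true \
  && echo "passwordless SSH OK" \
  || echo "passwordless SSH not yet working"
```

The BatchMode=yes option makes ssh fail immediately instead of prompting, which is exactly the behavior the Hadoop start scripts rely on when they launch daemons over SSH.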

Installation steps

After preparing your node for Hadoop, you need to follow a simple five-step process to install and run Hadoop on your Linux machine.

Step...