OpenStack Sahara Essentials

By: Omar Khedher

Overview of this book

The Sahara project is a module that aims to simplify the building of data processing capabilities on OpenStack. The goal of this book is to provide a focused, fast-paced guide to installing, configuring, and getting started with integrating Hadoop with OpenStack using Sahara. The book shows users how to deploy their data-intensive Hadoop and Spark clusters on top of OpenStack. It also covers how to use the Sahara REST API, develop applications for Elastic Data Processing on OpenStack, and set up Hadoop or Spark clusters on OpenStack.

Planning a Hadoop deployment


Basically, a cluster defines a set of networked nodes or instances that work together. In Hadoop clustering terminology, the building blocks can be categorized as master and slave nodes. Each category runs a specific set of daemons to store data and run parallel computation tasks on that data (MapReduce). The cluster nodes can be assigned different roles, which can be identified as the following (see the sketch after this list):

  • NameNode: This is the orchestrator and centerpiece of the Hadoop cluster; it stores the filesystem (HDFS) metadata.

  • JobTracker: This daemon coordinates the parallel data processing by scheduling and tracking MapReduce jobs across the cluster.

  • DataNode: This role can be assigned to the majority of slave nodes, which are the workhorses of the cluster. A DataNode daemon is a slave to the NameNode and stores the actual HDFS data blocks.

  • TaskTracker: Like the DataNode, this role can be assigned to a slave node as well. A TaskTracker daemon is a slave to the JobTracker and executes the map and reduce tasks assigned to it.
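In Sahara, these roles map to node processes declared in node group templates. The following is a minimal sketch of registering a master and a worker template against the Sahara REST API (v1.1 assumed), using the vanilla Hadoop plugin; the endpoint URL, project ID, token, flavor ID, and version string are illustrative placeholders, not values prescribed by this book.

# Sketch: register master and worker node group templates with Sahara.
# The endpoint, token, flavor ID, and Hadoop version below are assumptions;
# adapt them to your deployment.
import requests

SAHARA = "http://controller:8386/v1.1/<project_id>"   # hypothetical endpoint
HEADERS = {"X-Auth-Token": "<keystone_token>",        # token from Keystone
           "Content-Type": "application/json"}

# Master group: runs the NameNode and JobTracker daemons.
master_template = {
    "name": "hadoop-master",
    "plugin_name": "vanilla",
    "hadoop_version": "1.2.1",            # Hadoop V1-style daemons
    "flavor_id": "<flavor_id>",
    "node_processes": ["namenode", "jobtracker"],
}

# Worker group: runs the DataNode and TaskTracker daemons on each slave.
worker_template = {
    "name": "hadoop-worker",
    "plugin_name": "vanilla",
    "hadoop_version": "1.2.1",
    "flavor_id": "<flavor_id>",
    "node_processes": ["datanode", "tasktracker"],
}

for tpl in (master_template, worker_template):
    resp = requests.post(f"{SAHARA}/node-group-templates",
                         json=tpl, headers=HEADERS)
    resp.raise_for_status()
    print(resp.status_code, resp.json())

With two such templates, a cluster template can then combine one master node group with any number of worker node groups, which is how Sahara expresses the master/slave split described in the list above.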

Note

With YARN Hadoop V2, there are a few changes from...