OpenStack Sahara Essentials

By: Omar Khedher

Overview of this book

The Sahara project is a module that aims to simplify the building of data processing capabilities on OpenStack. The goal of this book is to provide a focused, fast-paced guide to installing, configuring, and getting started with integrating Hadoop with OpenStack using Sahara. The book shows users how to deploy their data-intensive Hadoop and Spark clusters on top of OpenStack. It also covers how to use the Sahara REST API, how to develop applications for Elastic Data Processing on OpenStack, and how to set up Hadoop or Spark clusters on OpenStack.

Boosting Elastic Data Processing performance


Processing data in Hadoop can be very expensive in terms of network transfer cost. In a Hadoop cluster, data is distributed across the nodes residing in the cluster: HDFS splits each data file into chunks to be analyzed by several nodes, and each chunk is additionally replicated across different machines for resiliency against data loss. Every chunk of data is treated in Hadoop as a set of records, parsed into a specific format depending on the application logic.
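
To make this concrete, here is a minimal sketch (not from the book) using the Hadoop Java FileSystem API; it writes a file into HDFS while setting the chunk (block) size and replication factor explicitly. The NameNode URI and file path are placeholder assumptions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder NameNode address; point this at your own cluster.
            conf.set("fs.defaultFS", "hdfs://namenode:8020");

            try (FileSystem fs = FileSystem.get(conf)) {
                Path file = new Path("/data/sample.txt"); // hypothetical path
                short replication = 3;               // copies kept of each chunk
                long blockSize = 128L * 1024 * 1024; // 128 MB chunks
                int bufferSize = 4096;

                // create(path, overwrite, bufferSize, replication, blockSize)
                try (FSDataOutputStream out =
                         fs.create(file, true, bufferSize, replication, blockSize)) {
                    out.writeBytes("example record\n");
                }
            }
        }
    }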

When it comes to processing the set of records assigned to each node, the Hadoop framework schedules each process on, or close to, the node where its data resides, based on the block location information held by HDFS. To avoid any unneeded network transfers, processes read their data from the local disk whenever possible, which yields the best computation results.
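
As an illustrative sketch (again, an assumption rather than the book's own code), the block locations that drive this placement are visible through the MapReduce InputSplit API; each split reports the hosts holding its data:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    public class SplitLocations {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration());
            // Hypothetical input path; files are split along HDFS block boundaries.
            FileInputFormat.addInputPath(job, new Path("/data/sample.txt"));

            for (InputSplit split : new TextInputFormat().getSplits(job)) {
                // getLocations() lists the DataNodes storing this split; the
                // scheduler prefers launching the map task on one of these hosts.
                System.out.println(split + " -> "
                        + String.join(",", split.getLocations()));
            }
        }
    }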

Ultimately, this strategy is very cost-effective: moving the computation to the data performs better than moving the data to the computation. The technique of data locality in Hadoop...