Hadoop Operations and Cluster Management Cookbook

By: Shumin Guo


Choosing from Hadoop alternatives


Although Hadoop has been very successful for most Big Data problems, it is not an optimal choice in every situation. In this recipe, we will introduce a few Hadoop alternatives.

Getting ready

Hadoop has the following drawbacks as a Big Data platform:

  • As open source software, Hadoop can be difficult to configure and manage, mainly because of software instability and the lack of properly maintained documentation and technical support

  • Hadoop is not an optimal choice for real-time, responsive Big Data applications

  • Hadoop is not a good fit for large graph datasets

Because of the preceding drawbacks, as well as other reasons such as special data processing requirements, we sometimes need to choose an alternative platform.

Tip

Hadoop is not a good choice for data that is not categorized as Big Data; for example, small datasets, or datasets whose processing requires transactions and synchronization.

How to do it…

We can choose Hadoop alternatives using the following guidelines:

  1. Choose Enterprise Hadoop if there is no qualified Hadoop administrator but there is a sufficient budget for deploying a Big Data platform.

  2. Choose Spark or Storm if an application requires real-time data processing.

  3. Choose GraphLab if an application requires handling of large graph datasets.

How it works…

Enterprise Hadoop refers to the Hadoop distributions offered by a number of Hadoop-oriented companies. Compared with the community Hadoop releases, Enterprise Hadoop distributions are enterprise ready, easier to configure, and sometimes ship with additional features. In addition, the training and support services provided by these companies make it much easier for organizations to adopt the Hadoop Big Data platform. Well-known Hadoop-oriented companies include Cloudera, Hortonworks, MapR, and Hadapt.

  • Cloudera is one of the most famous companies delivering Enterprise Hadoop Big Data solutions. It provides Hadoop consulting, training, and certification services, and it is also one of the biggest contributors to the Hadoop codebase. Its Big Data solution uses Cloudera Desktop as the cluster management interface. You can learn more at www.cloudera.com.

  • Hortonworks and MapR both provide their own Hadoop distributions and Hadoop-based Big Data solutions. You can get more details from www.hortonworks.com and www.mapr.com.

  • Hadapt differentiates itself from the other Hadoop-oriented companies through its goal of integrating structured, semi-structured, and unstructured data into a uniform data operation platform. Hadapt unifies SQL and Hadoop, making it easy to handle different varieties of data. You can learn more at http://hadapt.com/.

  • Spark is a real-time, in-memory Big Data processing platform that can be up to 40 times faster than Hadoop, which makes it ideal for iterative and responsive Big Data applications. In addition, Spark can be integrated with Hadoop, and its Hadoop-compatible storage APIs enable it to access any Hadoop-supported storage system (a minimal sketch follows this list). More information about Spark is available at http://spark-project.org/.

  • Storm is another famous real-time Big Data processing platform, developed and open sourced by Twitter. For more information, please visit http://storm-project.net/.

  • GraphLab is an open source distributed system developed at Carnegie Mellon University. It targets sparse, iterative graph algorithms. For more information, please visit http://graphlab.org/.

    Tip

    The MapReduce framework parallelizes computation by splitting data across a number of distributed nodes. Some large natural graph data, such as social network data, is hard to partition and thus hard to split for Hadoop's parallel processing. Performance can be severely penalized if Hadoop is used on such data.

  • Other Hadoop-like implementations include Phoenix (http://mapreduce.stanford.edu/), which is a shared-memory implementation of the MapReduce data processing framework, and HaLoop (http://code.google.com/p/haloop/), which is a modified version of Hadoop for iterative data processing.

    Tip

    Phoenix and HaLoop do not have active communities, and they are not recommended for production deployment.
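
To make the contrast with MapReduce concrete, the following is a minimal PySpark sketch of an iterative job: a toy gradient descent that fits the mean of a numeric dataset. The input path is hypothetical and the snippet assumes a working Spark installation; the point to notice is cache(), which keeps the parsed data in cluster memory so that every iteration reuses it, whereas a chain of MapReduce jobs would re-read HDFS on each pass.

    # A toy iterative PySpark job (a sketch, not production code).
    from pyspark import SparkContext

    sc = SparkContext(appName="IterativeSketch")

    # Hypothetical input path; any Hadoop-supported URI (hdfs://,
    # s3://, file://) works through the Hadoop-compatible storage APIs.
    nums = (sc.textFile("hdfs:///data/numbers.txt")
              .map(lambda line: float(line))
              .cache())  # keep the parsed dataset in cluster memory

    # Gradient descent on the squared loss (w - x)^2: every pass
    # reuses the cached data instead of re-reading it from disk.
    w = 0.0
    for _ in range(10):
        grad = nums.map(lambda x: 2.0 * (w - x)).mean()
        w -= 0.1 * grad

    print("estimated mean:", w)
    sc.stop()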

There's more...

As the Big Data problem floods the whole world, many systems have been designed to deal with it. Two famous systems that do not follow the MapReduce route are the Message Passing Interface (MPI) and the High-Performance Computing Cluster (HPCC).

MPI

MPI is a library specification for message passing. Unlike Hadoop, MPI was designed for high performance on both massively parallel machines and workstation clusters. However, MPI lacks built-in fault tolerance, and its performance becomes bounded as data grows large. More documentation about MPI can be found at http://www.mpi-forum.org/.
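
To give a flavor of the programming model, here is a minimal point-to-point sketch using the mpi4py Python bindings (an assumption for illustration; the MPI standard itself defines bindings for C, C++, and Fortran). Rank 0 sends a Python object to rank 1, which receives and prints it. Note that if either process dies mid-run, the standard offers no recovery mechanism, which is the fault tolerance gap mentioned above.

    # A minimal mpi4py sketch; mpi_demo.py is a hypothetical filename.
    # Requires an MPI runtime (for example, Open MPI) and the mpi4py
    # package. Run with: mpiexec -n 2 python mpi_demo.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD   # communicator containing all started processes
    rank = comm.Get_rank()  # this process's ID within the communicator

    if rank == 0:
        data = {"numbers": [1, 2, 3]}
        comm.send(data, dest=1, tag=11)     # blocking send to rank 1
    elif rank == 1:
        data = comm.recv(source=0, tag=11)  # blocking receive from rank 0
        print("rank 1 received:", data)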

HPCC

HPCC is an open source Big Data platform developed by HPCC Systems, which was acquired by LexisNexis Risk Solutions. It achieves high performance by clustering commodity hardware. The system includes configurations for both parallel batch data processing and high-performance online query applications that use indexed data files. The HPCC platform contains two cluster processing subsystems: the Data Refinery subsystem and the Data Delivery subsystem. The Data Refinery subsystem is responsible for the general processing of massive volumes of raw data, and the Data Delivery subsystem is responsible for delivering clean data for online queries and analytics. More information about HPCC can be found at http://hpccsystems.com/.