Hadoop Operations and Cluster Management Cookbook

By: Shumin Guo

Overview of this book

We are facing an avalanche of data. The unstructured data we gather can contain many insights that could hold the key to business success or failure. Harnessing the ability to analyze and process this data with Hadoop is one of the most highly sought-after skills in today's job market. Hadoop, by combining the computing and storage power of a large number of commodity machines, solves this problem in an elegant way!

Hadoop Operations and Cluster Management Cookbook is a practical and hands-on guide for designing and managing a Hadoop cluster. It will help you understand how Hadoop works and guide you through cluster management tasks.

This book explains real-world Big Data problems and the features of Hadoop that enable it to handle such problems. It breaks down the mystery of a Hadoop cluster and guides you through a number of clear, practical recipes that will help you to manage a Hadoop cluster.

We will start by installing and configuring a Hadoop cluster, while explaining hardware selection and networking considerations. We will also cover securing a Hadoop cluster with Kerberos, configuring cluster high availability, and monitoring a cluster. And if you want to know how to build a Hadoop cluster on the Amazon EC2 cloud, this is the book for you.

Defining a Big Data problem


Generally, Big Data is defined as data whose size goes beyond the ability of commonly used software tools to collect, manage, and process it within a tolerable elapsed time. More formally, the definition of Big Data should go beyond the size of the data to include other properties. In this recipe, we will outline the properties that define Big Data in a formal way.

Getting ready

Big Data is commonly characterized by three important properties: volume, velocity, and variety. In this book, we treat the value property of Big Data as a fourth important property; the value property also explains why the Big Data problem exists in the first place.

How to do it…

Defining a Big Data problem involves the following steps:

  1. Estimate the volume of data. The volume should include not only the current data volume, for example in gigabytes or terabytes, but also the volume expected in the future (a back-of-the-envelope sketch of such an estimate follows this list).

    There are two types of data in the real world: static and nonstatic data. The volume of static data, for example national census data and human genomic data, will not change over time, while the volume of nonstatic data, such as streaming log data and social network streaming data, increases over time.

  2. Estimate the velocity of data. The velocity estimate should include how much data can be generated within a certain amount of time, for example a day. For static data, the velocity is zero.

    The velocity property of Big Data defines the speed at which data is generated. This property not only affects the volume of data, but also determines how fast a data processing system should handle the data.

  3. Identify the data variety. Data variety refers to the different sources of data, such as web click data, social network data, data in relational databases, and so on.

    Variety means that data differs syntactically or semantically. These differences require a specifically designed module for each data variety to be integrated into the Big Data platform. For example, a web crawler is needed for getting data from the Web, and a data translation module is needed to transfer data from relational databases to a nonrelational Big Data platform (a sketch of this per-variety module pattern also follows this list).

  4. Define the expected value of data.

    The value property of Big Data defines what we can potentially derive from Big Data and how we can use it. For example, frequent item sets can be mined from online click-through data for better marketing and more efficient deployment of advertisements.
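To make the volume and velocity estimates in steps 1 and 2 concrete, here is a minimal back-of-the-envelope sketch in Python. All of the figures (daily log volume, growth rate, retention period) are hypothetical assumptions for illustration; only the threefold replication factor reflects the HDFS default.

```python
# Back-of-the-envelope volume/velocity estimate for a hypothetical
# log-collection workload. All input figures are assumptions.

DAILY_VOLUME_GB = 50.0   # assumed velocity: ~50 GB of new logs per day
ANNUAL_GROWTH = 0.30     # assumed 30% year-over-year growth in velocity
RETENTION_YEARS = 3      # assumed retention period
REPLICATION = 3          # HDFS default replication factor

total_raw_gb = 0.0
velocity = DAILY_VOLUME_GB
for year in range(1, RETENTION_YEARS + 1):
    yearly_gb = velocity * 365
    total_raw_gb += yearly_gb
    print(f"Year {year}: ~{velocity:.0f} GB/day, ~{yearly_gb / 1024:.1f} TB/year")
    velocity *= 1 + ANNUAL_GROWTH  # nonstatic data: velocity grows over time

total_with_replication_tb = total_raw_gb * REPLICATION / 1024
print(f"Raw data over {RETENTION_YEARS} years: ~{total_raw_gb / 1024:.1f} TB")
print(f"HDFS capacity needed (x{REPLICATION} replication): "
      f"~{total_with_replication_tb:.1f} TB")
```

Sizing against the projected velocity, rather than the current volume alone, is what keeps a cluster from running out of capacity as nonstatic data grows.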
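To illustrate the per-variety modules in step 3, the following is a schematic sketch of the pattern. The module and source names are hypothetical placeholders; in practice, each module would wrap a real tool, such as a web crawler or a relational-to-HDFS export utility.

```python
# Schematic sketch: each data variety gets its own ingestion module.
# All function names and sources are hypothetical placeholders.

def ingest_web_clicks(source):
    # A web crawler or log shipper would fetch click data here.
    return f"crawled {source}"

def ingest_relational(source):
    # A translation module would export relational tables into a
    # nonrelational format (for example, flat files on HDFS) here.
    return f"exported tables from {source}"

def ingest_social_stream(source):
    # A streaming collector would consume the social feed here.
    return f"consumed stream {source}"

# Registry mapping each data variety to its specifically designed module.
INGESTION_MODULES = {
    "web": ingest_web_clicks,
    "relational": ingest_relational,
    "social": ingest_social_stream,
}

def ingest(variety, source):
    try:
        module = INGESTION_MODULES[variety]
    except KeyError:
        raise ValueError(f"No ingestion module for variety: {variety}")
    return module(source)

print(ingest("web", "http://example.com"))
print(ingest("relational", "sales_db"))
```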

How it works…

A Big Data platform can be described with the IPO (http://en.wikipedia.org/wiki/IPO_Model) model, which includes three components: input, process, and output. For a Big Data problem, the volume, velocity, and variety properties together define the input of the system, and the value property defines the output.
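This mapping can be summarized in a small sketch; the class and function names below are hypothetical placeholders, used only to show which properties sit on the input side of the IPO model and which on the output side.

```python
# Minimal sketch of the IPO model applied to a Big Data problem.
# Names are hypothetical placeholders for illustration.
from dataclasses import dataclass

@dataclass
class BigDataInput:
    volume_tb: float        # volume: the size of the data
    velocity_gb_day: float  # velocity: how fast new data arrives
    varieties: list         # variety: the different data sources

def process(inp: BigDataInput) -> str:
    # The "process" component (for example, a Hadoop cluster) turns the
    # input, characterized by the three Vs, into the system's output.
    return (f"insights mined from {inp.volume_tb} TB of "
            f"{', '.join(inp.varieties)} data")

clicks = BigDataInput(volume_tb=120, velocity_gb_day=50,
                      varieties=["web click", "social network"])
print(process(clicks))  # the value property: the output of the system
```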

See also

  • The Building a Hadoop-based Big Data platform recipe