
Hadoop Real-World Solutions Cookbook

By : Jonathan R. Owens, Jon Lentz, Brian Femiano

Overview of this book

This book helps developers become more comfortable and proficient at solving problems in the Hadoop space, and familiarizes them with a wide variety of Hadoop-related tools and implementation best practices.

Hadoop Real-World Solutions Cookbook teaches readers how to build solutions using tools such as Apache Hive, Pig, MapReduce, Mahout, Giraph, HDFS, Accumulo, Redis, and Ganglia.

The book provides in-depth explanations and code examples. Each chapter contains a set of recipes that pose, then solve, technical challenges, and the recipes can be completed in any order. A recipe breaks a single problem down into discrete, easy-to-follow steps. The book covers loading data to and from HDFS, graph analytics with Giraph, batch data analysis using Hive, Pig, and MapReduce, machine learning approaches with Mahout, debugging and troubleshooting MapReduce, and columnar storage and retrieval of structured data using Apache Accumulo.

Hadoop Real-World Solutions Cookbook gives readers the examples they need to apply Hadoop technology to their own problems.
Table of Contents (17 chapters)
Hadoop Real-World Solutions Cookbook
Credits
About the Authors
About the Reviewers
www.packtpub.com
Preface
Index

Monitoring cluster health using Ganglia


Ganglia is a monitoring system designed for use with clusters and grids. Hadoop can be configured to send periodic metrics to the Ganglia monitoring daemon, which is useful for monitoring and diagnosing the health of the cluster. This recipe will explain how to configure Hadoop to send metrics to the Ganglia monitoring daemon.
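On the Hadoop side, this wiring is done through a metrics configuration file. The following is only a sketch: the exact file name and sink class depend on your Hadoop version (on releases that use the metrics2 framework, the file is conf/hadoop-metrics2.properties and the Ganglia 3.1+ sink is GangliaSink31), and gmond-host.example.com is a placeholder for a node running gmond:

# Send all Hadoop metrics to Ganglia 3.1+ every 10 seconds
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10

# Placeholder host; 8649 is gmond's default port
namenode.sink.ganglia.servers=gmond-host.example.com:8649
datanode.sink.ganglia.servers=gmond-host.example.com:8649

After restarting the Hadoop daemons, the NameNode and DataNode metrics should begin appearing in the Ganglia web frontend.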

Getting ready

Ensure that you have Ganglia version 3.1 or later installed on all of the nodes in the Hadoop cluster. The Ganglia monitoring daemon (gmond) should be running on every worker node in the cluster. You will also need the Ganglia meta daemon (gmetad) running on at least one node, and a node running the Ganglia web frontend.

The following is an example of a modified gmond.conf file that can be used by the gmond daemon:

cluster {
  name = "Hadoop Cluster"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}

host {
  location = "my datacenter"
}

udp_send_channel {
  host = mynode.company...