Hadoop Real-World Solutions Cookbook

By: Jonathan R. Owens, Jon Lentz, Brian Femiano

Overview of this book

This book helps developers become more comfortable and proficient at solving problems in the Hadoop space, and familiarizes them with a wide variety of Hadoop-related tools and best practices for implementation.

Hadoop Real-World Solutions Cookbook teaches readers how to build solutions using tools such as Apache Hive, Pig, MapReduce, Mahout, Giraph, HDFS, Accumulo, Redis, and Ganglia.

Hadoop Real-World Solutions Cookbook provides in-depth explanations and code examples. Each chapter contains a set of recipes that pose, and then solve, technical challenges, and the recipes can be completed in any order. A recipe breaks a single problem down into discrete steps that are easy to follow. The book covers (un)loading to and from HDFS, graph analytics with Giraph, batch data analysis using Hive, Pig, and MapReduce, machine learning approaches with Mahout, debugging and troubleshooting MapReduce, and columnar storage and retrieval of structured data using Apache Accumulo.

Hadoop Real-World Solutions Cookbook will give readers the examples they need to apply Hadoop technology to their own problems.

Tuning MapReduce job parameters


The Hadoop framework is very flexible and can be tuned using a number of configuration parameters. In this recipe, we will discuss the function and purpose of different configuration parameters you can set for a MapReduce job.
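For example, the number of reduce tasks for a job is controlled by a single property. The following is a minimal sketch of setting it programmatically on an org.apache.hadoop.conf.Configuration, assuming the Hadoop 1.x property name (newer releases use the mapreduce.* equivalent, mapreduce.job.reduces):

    // Override a tunable job parameter in code; the same property can also
    // be set in mapred-site.xml or passed on the command line with -D
    Configuration conf = new Configuration();
    conf.setInt("mapred.reduce.tasks", 10);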

Getting ready

Ensure that you have a MapReduce job whose job class extends the Hadoop Configured class and implements the Hadoop Tool interface, such as any of the MapReduce applications we have written so far in this book.
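If you need a starting point, the following is a minimal sketch of such a job class, assuming the new (org.apache.hadoop.mapreduce) API; MyMapReduceJob matches the class name used in the next section, and the mapper, reducer, and I/O settings are omitted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.util.Tool;

    public class MyMapReduceJob extends Configured implements Tool {

        @Override
        public int run(String[] args) throws Exception {
            // getConf() returns the Configuration that ToolRunner populated,
            // including any generic options (such as -D property=value)
            // passed on the command line
            Configuration conf = getConf();
            Job job = new Job(conf, "MyMapReduceJob");
            job.setJarByClass(MyMapReduceJob.class);
            // set the mapper, reducer, and input/output paths here
            return job.waitForCompletion(true) ? 0 : 1;
        }
    }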

How to do it...

Follow these steps to customize MapReduce job parameters:

  1. Ensure that you have a MapReduce job class that extends the Hadoop Configured class and implements the Tool interface.

  2. Use the ToolRunner.run() static method to run your MapReduce job, as shown in the following example:

    public static void main(String[] args) throws Exception {
        // ToolRunner parses generic Hadoop options (such as -D, -conf,
        // and -files) into the job's Configuration before calling run()
        int exitCode = ToolRunner.run(new MyMapReduceJob(), args);
        System.exit(exitCode);
    }
  3. Examine the following table of Hadoop job properties and values:

    Property name

    Possible...
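Because the job runs through ToolRunner, any of these properties can also be overridden at submission time with the generic -D option, without recompiling the job. A hypothetical invocation (the JAR name, class name, and paths are placeholders; the property uses the Hadoop 1.x name):

    hadoop jar myjob.jar com.example.MyMapReduceJob -D mapred.reduce.tasks=10 /input /output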