Hadoop MapReduce Cookbook

By: Srinath Perera, Thilina Gunarathne

Overview of this book

We are facing an avalanche of data. The unstructured data we gather can contain many insights that might hold the key to business success or failure. Harnessing the ability to analyze and process this data with Hadoop MapReduce is one of the most highly sought after skills in today's job market.

"Hadoop MapReduce Cookbook" is a one-stop guide to processing large and complex data sets using the Hadoop ecosystem. The book introduces you to simple examples and then dives deep to solve in-depth big data use cases.

"Hadoop MapReduce Cookbook" presents more than 50 ready-to-use Hadoop MapReduce recipes in a simple and straightforward manner, with step-by-step instructions and real-world examples.

Start with how to install, configure, extend, and administer Hadoop. Then write simple examples, learn MapReduce patterns, harness the Hadoop landscape, and finally jump to the cloud.

The book deals with many exciting topics, such as setting up Hadoop security and using MapReduce to solve analytics, classification, online marketing, recommendation, and searching use cases. You will learn how to harness components of the Hadoop ecosystem, including HBase, Hadoop, Pig, and Mahout, and then learn how to set up cloud environments to perform Hadoop MapReduce computations.

"Hadoop MapReduce Cookbook" teaches you how to process large and complex data sets using real examples, providing a comprehensive guide to getting things done using Hadoop MapReduce.

Setting HDFS block size


HDFS stores files across the cluster by breaking them down into coarse-grained, fixed-size blocks. The default HDFS block size is 64 MB. The block size affects the performance of filesystem operations: larger block sizes are more effective if you are storing and processing very large files. The block size also affects the performance of MapReduce computations, as the default behavior of Hadoop is to create one map task for each data block of the input files.
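As a quick check, you can inspect the block size of a file already stored in HDFS with the hadoop fs -stat command, where the %o format specifier prints the block size in bytes. The path and output below are only illustrative; on a cluster running with the default configuration, this would print 67108864 (64 MB).

    $ bin/hadoop fs -stat %o /user/foo/data.in
    67108864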

How to do it...

  1. To use the NameNode configuration file to set the HDFS block size, add or modify the following property in $HADOOP_HOME/conf/hdfs-site.xml. The block size is specified in bytes. This change does not alter the block size of files that are already in HDFS; only files copied in after the change will use the new block size.

    <property>
      <name>dfs.block.size</name>
      <value>134217728</value>
    </property>
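You can also override the block size for an individual file at upload time, without changing the cluster-wide setting, by passing the same property as a generic -D option to the filesystem shell. The file and destination names here are hypothetical:

    $ bin/hadoop fs -Ddfs.block.size=134217728 -put data.in /user/foo/data.in

This uploads data.in using 128 MB blocks, regardless of the value configured in hdfs-site.xml, and you can verify the result with the hadoop fs -stat command shown earlier.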