Running Hadoop


Before setting up HDFS, we must ensure that Hadoop is configured for the pseudo-distributed mode, as described in the previous section, Configuring Hadoop. Set the JAVA_HOME and HADOOP_PREFIX environment variables in your profile before you proceed (a sample profile snippet follows the command below). To set up a single-node configuration, you first need to format the underlying HDFS file system; this can be done by running the following command:

  $ $HADOOP_PREFIX/bin/hdfs namenode -format
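
The environment variables mentioned earlier are typically exported from your shell profile, for example ~/.bashrc. The following is a minimal sketch; the JDK and Hadoop paths shown here are placeholders, so substitute the locations from your own installation:

  export JAVA_HOME=/usr/lib/jvm/java-7-oracle
  export HADOOP_PREFIX=/opt/hadoop-2.5.1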

Once the formatting is complete, start HDFS with the following command:

  $ $HADOOP_PREFIX/sbin/start-dfs.sh

The start-dfs.sh script starts the name node, data node, and secondary name node on your machine through ssh. The Hadoop daemon log output is written to the $HADOOP_LOG_DIR folder, which by default points to $HADOOP_HOME/logs. Once HDFS is up, browse the web interface for the NameNode; by default, it is available at http://localhost:50070/, and it displays a summary of the HDFS information. You can also confirm that the three daemon processes are running by checking the list of running processes on your machine, as shown next.
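
A quick way to do this is with the jps tool that ships with the JDK; its output should look something like the following (the process IDs will differ on your machine):

  $ jps
  4821 NameNode
  4932 DataNode
  5120 SecondaryNameNode
  5248 Jps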

Once HDFS is set up and started, you can use all the Hadoop commands to perform file system operations. The next step is to start the MapReduce framework, which includes the NodeManager and the ResourceManager (RM). This can be done by running the following command:

  $ $HADOOP_PREFIX/sbin/start-yarn.sh

You can access the RM web page at http://localhost:8088/, which shows the cluster status of your newly set-up Hadoop instance.
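
You can also verify YARN from the command line by asking the RM for its list of registered nodes; on a single-node setup, this should report exactly one node:

  $ $HADOOP_PREFIX/bin/yarn node -list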

We are now ready to use this Hadoop setup for development.

Note

Safe Mode

When a cluster is started, NameNode resumes its complete functionality only when the configured minimum percentage of blocks satisfies the minimum replication requirement. Until then, it runs in safe mode. While NameNode is in the safe mode state, it does not allow any modifications to its file system. This mode can be turned off manually by running the following command:

  $ bin/hdfs dfsadmin -safemode leave
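
Before forcing NameNode out of safe mode, you can check whether it is still in that state:

  $ bin/hdfs dfsadmin -safemode get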

You can test the instance by running the following commands. The first command creates a test folder, so ensure that a folder with this name does not already exist on the server instance:

  $ bin/hdfs dfs -mkdir /test

This creates the folder. Now, load an input file into it by using the following command:

  $ bin/hdfs dfs -put <file-location> /test/input
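
You can verify that the file has landed in HDFS by listing the test folder:

  $ bin/hdfs dfs -ls /test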

Now, run the wordcount example that is packaged with the Hadoop deployment:

  $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar wordcount /test/input /test/output

A successful run will create the output in the /test/output/part-r-00000 file in HDFS. You can view the output by copying this file from HDFS to the local machine, as shown next.
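
For a quick look, you can print the file directly from HDFS, or copy it to a local path of your choice (the local filename here is just an example):

  $ bin/hdfs dfs -cat /test/output/part-r-00000
  $ bin/hdfs dfs -get /test/output/part-r-00000 wordcount-output.txt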