Apache Hadoop 3 Quick Start Guide

By: Hrishikesh Vijay Karambelkar

Overview of this book

Apache Hadoop is a widely used distributed data platform. It enables large datasets to be processed efficiently across a cluster of machines instead of on a single large computer. This book will get you started with the Hadoop ecosystem and introduce you to the main technical topics, including MapReduce, YARN, and HDFS. The book begins with an overview of big data and Apache Hadoop. Then, you will set up a pseudo-distributed Hadoop development environment and a multi-node enterprise Hadoop cluster. You will see how a parallel programming paradigm such as MapReduce can solve many complex data processing problems. The book also covers the important aspects of the big data software development lifecycle, including quality assurance and control, performance, administration, and monitoring. You will then learn about the Hadoop ecosystem and tools such as Kafka, Sqoop, Flume, Pig, Hive, and HBase. Finally, you will look at advanced topics, including real-time streaming using Apache Storm and data analytics using Apache Spark. By the end of the book, you will be well versed with different configurations of the Hadoop 3 cluster.

How HDFS works

When we set up a Hadoop cluster, Hadoop creates a virtual layer on top of your local filesystem (such as a Windows- or Linux-based filesystem). HDFS does not map to any physical filesystem on the operating system; instead, Hadoop offers an abstraction on top of your local filesystem to provide a fault-tolerant distributed filesystem service. The overall design and access pattern in HDFS is similar to that of a Linux-based filesystem. The following diagram shows the high-level architecture of HDFS:
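As a concrete illustration of this virtual layer, a minimal core-site.xml entry points clients at HDFS rather than the local filesystem. This is a sketch only; the host and port shown are placeholders for a single-node setup, not values from this book:

```xml
<configuration>
  <!-- Make HDFS the default filesystem for Hadoop clients.
       hdfs://localhost:9000 is a placeholder for a single-node setup. -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

With this in place, familiar commands such as `hdfs dfs -ls /` and `hdfs dfs -mkdir /data` behave much like their Linux counterparts (`ls`, `mkdir`), even though the data itself lives in blocks spread across the cluster's local disks.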

We have covered the NameNode, Secondary NameNode, and DataNode in Chapter 1, Hadoop 3.0 - Background and Introduction. Each file sent to HDFS is sliced into a number of blocks that need to be distributed across the cluster. The NameNode maintains the registry (or name table) of all of these blocks in the local filesystem path specified with dfs.namenode.name.dir in hdfs-site...
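The block-slicing idea above can be sketched in a few lines of Python. This is an illustrative model, not Hadoop code: it assumes the Hadoop 3 default block size of 128 MB and uses a naive round-robin replica placement, whereas real HDFS placement is rack-aware. All function and node names here are hypothetical:

```python
# Illustrative sketch (not Hadoop source): slicing a file into HDFS-style
# blocks and placing replicas. Assumes the default 128 MB block size.

BLOCK_SIZE = 128 * 1024 * 1024  # Hadoop 3 default dfs.blocksize


def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return (block_id, length) pairs covering a file of file_size bytes."""
    blocks = []
    offset, block_id = 0, 0
    while offset < file_size:
        length = min(block_size, file_size - offset)
        blocks.append((block_id, length))
        offset += length
        block_id += 1
    return blocks


def place_replicas(blocks, datanodes, replication=3):
    """Naive round-robin replica placement; real HDFS is rack-aware."""
    placement = {}
    for block_id, _ in blocks:
        placement[block_id] = [
            datanodes[(block_id + r) % len(datanodes)]
            for r in range(min(replication, len(datanodes)))
        ]
    return placement


blocks = split_into_blocks(300 * 1024 * 1024)  # a 300 MB file
print(blocks)  # three blocks: 128 MB, 128 MB, and a final 44 MB block
placement = place_replicas(blocks, ["dn1", "dn2", "dn3", "dn4"])
print(placement)
```

In real HDFS, the NameNode holds exactly this kind of block-to-DataNode mapping in memory, which is why its metadata directory (dfs.namenode.name.dir) is so critical to protect.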