Mastering Hadoop 3

By: Chanchal Singh, Manish Kumar

Overview of this book

Apache Hadoop is one of the most popular big data solutions for distributed storage and for processing large chunks of data. With Hadoop 3, Apache promises to provide a high-performance, more fault-tolerant, and highly efficient big data processing platform, with a focus on improved scalability and increased efficiency. This guide walks you through HDFS, YARN, MapReduce, and other Hadoop 3 concepts: you'll learn how Hadoop works internally, study advanced concepts of different ecosystem tools, discover solutions to real-world use cases, and understand how to secure your cluster. You'll be able to address common challenges such as using Kafka efficiently, designing low-latency, reliable message-delivery systems with Kafka, and handling high data volumes. As you advance, you'll discover how to address major challenges when building an enterprise-grade messaging system, and how to use different stream processing systems along with Kafka to fulfil your enterprise goals. By the end of this book, you'll have a complete understanding of how components in the Hadoop ecosystem can be effectively integrated to implement a fast and reliable data pipeline, and you'll be equipped to tackle a range of real-world problems in data pipelines.

Defining HDFS


HDFS is designed to run on a cluster of commodity hardware. It is a fault-tolerant, scalable file system that handles the failure of nodes without data loss and can scale horizontally to any number of nodes. The initial goal of HDFS was to serve large data files with high read and write performance.
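To make this concrete, here is a minimal sketch (not taken from the book) of a client writing to and reading from HDFS through Hadoop's Java FileSystem API; the NameNode URI and file path are hypothetical placeholders:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHelloWorld {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to the NameNode; "namenode-host" is a placeholder.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:8020"), conf);

        Path file = new Path("/tmp/hello.txt");

        // Write: the client streams the data, and HDFS transparently splits it
        // into blocks and replicates each block across DataNodes.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("Hello, HDFS!");
        }

        // Read the data back.
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }

        fs.close();
    }
}
```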

The following are a few essential features of HDFS:

  • Fault tolerance: Downtime due to machine failure or data loss can result in huge losses for a company; therefore, companies want a highly available, fault-tolerant system. HDFS is designed to handle failures and ensures data availability through corrective and preventive actions. Files stored in HDFS are split into small chunks, and each chunk is referred to as a block. Each block is either 64 MB or 128 MB, depending on the configuration. Blocks are replicated across the cluster based on the replication factor (see the configuration sketch after this list). This means that if the replication factor is three, then the block will be replicated to three machines. This ensures that, if...
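As referenced in the list above, the block size and replication factor are ordinary cluster settings in hdfs-site.xml. The following is an illustrative sketch; the values shown are common choices (128 MB blocks, three replicas), not guaranteed defaults for every Hadoop release:

```xml
<!-- hdfs-site.xml: illustrative values only -->
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <!-- 134217728 bytes = 128 MB per block -->
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <!-- each block is stored on three DataNodes -->
    <value>3</value>
  </property>
</configuration>
```

With these settings, a 1 GB file would be split into eight 128 MB blocks, and each block would be stored on three different DataNodes, so the loss of any single machine leaves at least two copies of every block available.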