Mastering Hadoop 3

By: Chanchal Singh, Manish Kumar

Overview of this book

Apache Hadoop is one of the most popular big data solutions for distributed storage and for processing large chunks of data. With Hadoop 3, Apache promises to deliver a high-performance, more fault-tolerant, and highly efficient big data processing platform, with a focus on improved scalability and increased efficiency. With this guide, you'll understand advanced concepts of the Hadoop ecosystem tools. You'll learn how Hadoop works internally, study advanced concepts of different ecosystem tools, discover solutions to real-world use cases, and understand how to secure your cluster. The book then walks you through HDFS, YARN, MapReduce, and Hadoop 3 concepts. You'll be able to address common challenges such as using Kafka efficiently, designing low-latency, reliable message delivery Kafka systems, and handling high data volumes. As you advance, you'll discover how to address major challenges when building an enterprise-grade messaging system, and how to use different stream processing systems along with Kafka to fulfil your enterprise goals. By the end of this book, you'll have a complete understanding of how components in the Hadoop ecosystem are effectively integrated to implement a fast and reliable data pipeline, and you'll be equipped to tackle a range of real-world problems in data pipelines.

Introduction to benchmarking and profiling


Organizations use Hadoop clusters in different ways. One of the primary ways is to build data lakes on top of the Hadoop cluster. A data lake is built on top of different types of data sources, and each of these sources varies in nature, such as in the type of data or the frequency at which it arrives. The processing required for each of these sources in the data lake also varies: some of it is real-time processing and some of it is batch processing. The Hadoop cluster on top of which the data lake is built has to handle all of these different types of workloads. Some of these workloads are memory intensive, while others are both memory and CPU intensive. As an organization, it becomes imperative that you benchmark and profile your cluster for these different types of workloads. Another reason for benchmarking and profiling your cluster is that your cluster nodes may have different hardware configurations. For varying workloads, it is important that organizations ensure how...
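Hadoop itself ships with standard benchmarks for exactly this purpose, such as TestDFSIO for HDFS I/O throughput and TeraGen/TeraSort for MapReduce workloads. To make the idea concrete, the following is a minimal sketch in Java of the simplest possible HDFS benchmark: timing a raw sequential write and reporting throughput. It assumes an HDFS cluster reachable through the default Configuration (that is, core-site.xml and hdfs-site.xml on the classpath); the target path, total size, and buffer size are illustrative values for this example, not recommendations.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteBenchmark {
    public static void main(String[] args) throws Exception {
        // Illustrative values; tune these to match the workload you want to model.
        Path target = new Path("/benchmarks/write-probe.dat"); // hypothetical probe path
        long totalBytes = 1L << 30;        // write 1 GiB in total
        byte[] buffer = new byte[1 << 20]; // in 1 MiB chunks

        // Picks up the cluster settings from core-site.xml / hdfs-site.xml.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            long start = System.nanoTime();
            try (FSDataOutputStream out = fs.create(target, true)) {
                for (long written = 0; written < totalBytes; written += buffer.length) {
                    out.write(buffer);
                }
                out.hsync(); // flush to the DataNodes before stopping the clock
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            double mbPerSec = (totalBytes / (1024.0 * 1024.0)) / seconds;
            System.out.printf("Wrote %d bytes in %.2f s (%.1f MB/s)%n",
                    totalBytes, seconds, mbPerSec);
            fs.delete(target, false); // clean up the probe file
        }
    }
}

You would typically compile this against the hadoop-client artifact and launch it with the hadoop jar command so that the cluster configuration is on the classpath. A real benchmarking run would repeat such probes across nodes and workload shapes (read-heavy, write-heavy, CPU-bound jobs) to build the per-node profile discussed above.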