Mastering Hadoop 3

By: Chanchal Singh, Manish Kumar
Overview of this book

Apache Hadoop is one of the most popular big data solutions for distributed storage and for processing large volumes of data. With Hadoop 3, Apache promises to provide a high-performance, more fault-tolerant, and highly efficient big data processing platform, with a focus on improved scalability and increased efficiency. With this guide, you’ll understand advanced concepts of the Hadoop ecosystem tools. You’ll learn how Hadoop works internally, study advanced concepts of different ecosystem tools, discover solutions to real-world use cases, and understand how to secure your cluster. The book walks you through HDFS, YARN, MapReduce, and Hadoop 3 concepts. You’ll be able to address common challenges such as using Kafka efficiently, designing low-latency, reliable message-delivery Kafka systems, and handling high data volumes. As you advance, you’ll discover how to address major challenges when building an enterprise-grade messaging system, and how to use different stream processing systems along with Kafka to fulfil your enterprise goals. By the end of this book, you’ll have a complete understanding of how components in the Hadoop ecosystem are effectively integrated to implement a fast and reliable data pipeline, and you’ll be equipped to tackle a range of real-world problems in data pipelines.

Common batch processing pattern


Batch processing has been used for many years, and a number of common design problems are encountered during batch processing implementations. Over time, practitioners have developed common design patterns to solve these problems. In this section, we will discuss some of these design patterns and the techniques used to address them.

Slowly changing dimension

Slowly Changing Dimension (SCD) refers to dimension data in which some or most attributes change at irregular intervals. There are multiple SCD types, and each has a different implementation in Hadoop. We will talk about each type and how to deal with it, looking in particular at Type 1 and Type 2, the most commonly used slowly changing dimension types. A short sketch contrasting the two follows.
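To make the distinction concrete, here is a minimal sketch in plain Python (the customer records and field names are made up for illustration; this is not code from the book) showing how the same change to a dimension record is handled under Type 1 versus Type 2:

# A hypothetical customer dimension record whose "city" attribute changes.
old_record = {"customer_id": 42, "name": "Alice", "city": "Pune"}
new_record = {"customer_id": 42, "name": "Alice", "city": "Mumbai"}

# Type 1: overwrite in place -- the old city is lost.
type1_table = {old_record["customer_id"]: old_record}
type1_table[new_record["customer_id"]] = new_record

# Type 2: keep history -- close the old row and append a new current row.
type2_table = [dict(old_record, start_date="2018-01-01", end_date=None, current=True)]
type2_table[-1].update(end_date="2019-06-01", current=False)
type2_table.append(dict(new_record, start_date="2019-06-01", end_date=None, current=True))

Type 1 keeps the table small but discards history; Type 2 preserves every version of a record at the cost of extra rows and bookkeeping columns.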

Slowly changing dimensions – type 1

A slowly changing dimensions Type 1 implementation overwrites the old data with new and updated records. It does not maintain a history of the old data, meaning we will not be able to track the changes if we want to...
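Because files in HDFS are immutable, a Type 1 update is typically implemented as a batch rewrite: join the existing dimension data with the incoming changes and keep the incoming record wherever the keys collide. Here is a minimal sketch in plain Python (the function name, field names, and sample data are illustrative assumptions, not the book's code):

# scd_type1_merge: incoming records overwrite existing ones with the
# same key; existing records without a matching update pass through.
def scd_type1_merge(existing, updates, key="customer_id"):
    merged = {row[key]: row for row in existing}       # current state, keyed
    merged.update({row[key]: row for row in updates})  # overwrite on collision
    return list(merged.values())                       # rewritten dimension

existing = [
    {"customer_id": 1, "city": "Pune"},
    {"customer_id": 2, "city": "Delhi"},
]
updates = [
    {"customer_id": 2, "city": "Mumbai"},   # changed attribute
    {"customer_id": 3, "city": "Chennai"},  # brand-new record
]
print(scd_type1_merge(existing, updates))
# customer 2 now shows Mumbai, customer 3 is added, customer 1 is untouched

In a real Hadoop job, the same overwrite-on-key logic would run as a distributed join (for example, in MapReduce or Spark), with the merged output written to a new HDFS directory that replaces the old one.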