Modern Big Data Processing with Hadoop

By: V Naresh Kumar, Manoj R Patil, Prashant Shindgikar

Overview of this book

The complex structure of data these days requires sophisticated solutions for data transformation, to make the information more accessible to users. This book empowers you to build such solutions with relative ease with the help of Apache Hadoop, along with a host of other Big Data tools. This book will give you a complete understanding of data lifecycle management with Hadoop, followed by modeling structured and unstructured data in Hadoop. It will also show you how to design real-time streaming pipelines by leveraging tools such as Apache Spark, and how to build efficient enterprise search solutions using Elasticsearch. You will learn to build enterprise-grade analytics solutions on Hadoop and to visualize your data using tools such as Apache Superset. This book also covers techniques for deploying your Big Data solutions on the cloud with Apache Ambari, as well as expert techniques for managing and administering your Hadoop cluster. By the end of this book, you will have all the knowledge you need to build expert Big Data systems.

Data architecture principles

Data in its current state can be described along the following four dimensions (the four Vs): volume, velocity, variety, and veracity.


Volume

The volume of data is an important measure in designing a big data system. It is a key factor in deciding the investment an enterprise has to make to cater to its present and future storage requirements.

Different types of data in an enterprise need different capacities for storage, archival, and processing. Petabyte-scale storage systems are very common in the industry today, a capacity that was almost impossible to reach a few decades ago.
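To make the sizing exercise concrete, here is a minimal back-of-the-envelope capacity sketch in Python. The daily ingest rate, retention window, and overhead figures are hypothetical placeholders; the threefold replication factor is HDFS's default.

```python
# A minimal sizing sketch. The ingest, retention, and overhead figures
# below are hypothetical; 3x replication is the HDFS default.

DAILY_INGEST_TB = 2.0    # hypothetical raw ingest per day, in terabytes
RETENTION_DAYS = 365     # hypothetical retention window
REPLICATION_FACTOR = 3   # HDFS default block replication
OVERHEAD = 1.2           # ~20% headroom for temporary/intermediate data

raw_tb = DAILY_INGEST_TB * RETENTION_DAYS
provisioned_tb = raw_tb * REPLICATION_FACTOR * OVERHEAD

print(f"Raw data retained:    {raw_tb:,.0f} TB")
print(f"Storage to provision: {provisioned_tb:,.0f} TB "
      f"(~{provisioned_tb / 1024:.1f} PB)")
```

Even a modest 2 TB/day ingest rate lands in petabyte territory once replication and a year of retention are accounted for.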


Velocity

This is another dimension of the data, one that captures how quickly it moves. The data within organizations falls under the following categories:

  • Streaming data:
    • Real-time/near-real-time data
  • Data at rest:
    • Immutable data
    • Mutable data

This dimension has some impact on the network architecture that an enterprise uses to consume and process data.
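As a minimal illustration of the difference between these two categories, the following Python sketch contrasts a streaming source, where events are handled one at a time as they arrive, with data at rest, where the full bounded dataset is available up front. The source and record structure are hypothetical.

```python
import time
from typing import Dict, Iterator, List


def stream_source() -> Iterator[Dict]:
    """Simulates a real-time source emitting one event at a time."""
    for i in range(3):
        yield {"event_id": i, "ts": time.time()}
        time.sleep(0.1)  # events arrive over time, not all at once


def process_stream(events: Iterator[Dict]) -> None:
    # Streaming: each event is handled as it arrives (low latency, unbounded).
    for event in events:
        print("stream:", event)


def process_batch(records: List[Dict]) -> None:
    # Data at rest: the complete, bounded dataset is processed in one pass.
    print("batch:", len(records), "records processed in one pass")


process_stream(stream_source())
process_batch([{"event_id": i} for i in range(1000)])
```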


Variety

This dimension describes the form and shape of the data. We can further classify it into the following categories:

  • Streaming data:
    • On-wire data format (for example, JSON, MPEG, and Avro)
  • Data at rest:
    • Immutable data (for example, media files and customer invoices)
    • Mutable data (for example, customer details, product inventory, and employee data)
  • Application data:
    • Configuration files, secrets, passwords, and so on

As an organization, it is important to standardize on as few technologies and formats as possible in order to reduce the variety of data. Having many different types of data poses a very big challenge to an enterprise in terms of managing and consuming it all.
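As a small illustration of two of the on-wire formats named above, the following Python sketch expresses the same hypothetical customer record as a JSON payload and defines a matching Avro schema (Avro schemas are themselves JSON documents). Actual Avro serialization would additionally require the avro library; only the schema definition is shown here.

```python
import json

# One hypothetical customer record, serialized as an on-wire JSON payload.
record = {"customer_id": 42, "name": "Jane Doe", "active": True}
json_payload = json.dumps(record)
print(json_payload)

# The equivalent Avro schema; note that the schema itself is plain JSON.
avro_schema = {
    "type": "record",
    "name": "Customer",
    "fields": [
        {"name": "customer_id", "type": "long"},
        {"name": "name", "type": "string"},
        {"name": "active", "type": "boolean"},
    ],
}
print(json.dumps(avro_schema, indent=2))
```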


Veracity

This dimension concerns the accuracy of the data. Without a solid understanding of the guarantees that each system within an enterprise provides to keep the data safe, available, and reliable, it becomes very difficult to trust the analytics generated from this data, let alone the insights derived from it.

Necessary auditing should be in place to make sure that data flowing through the system passes all quality checks before it finally lands in the big data system.
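A minimal sketch of such a record-level quality gate is shown below; the field names and rules are hypothetical, and real deployments would typically embed these checks in the ingestion pipeline rather than in standalone code.

```python
from typing import Dict, List


def validate(record: Dict) -> List[str]:
    """Return a list of quality-check failures for one record."""
    failures = []
    if record.get("customer_id") is None:
        failures.append("missing customer_id")
    amount = record.get("amount")
    if amount is None or amount < 0:
        failures.append("amount missing or negative")
    return failures


records = [
    {"customer_id": 1, "amount": 99.5},
    {"customer_id": None, "amount": -3.0},  # fails both checks
]
for r in records:
    errors = validate(r)
    print("PASS" if not errors else f"FAIL: {errors}", r)
```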

Let's see what a typical big data system looks like:

As the preceding diagram shows, many different types of applications interact with the big data system to store, process, and generate analytics.