Modern Big Data Processing with Hadoop

By: V Naresh Kumar, Manoj R Patil, Prashant Shindgikar

Overview of this book

The complex structure of data these days requires sophisticated solutions for data transformation, to make the information more accessible to users. This book empowers you to build such solutions with relative ease with the help of Apache Hadoop, along with a host of other Big Data tools. It gives you a complete understanding of data lifecycle management with Hadoop, followed by modeling of structured and unstructured data in Hadoop. It also shows you how to design real-time streaming pipelines by leveraging tools such as Apache Spark, and how to build efficient enterprise search solutions using Elasticsearch. You will learn to build enterprise-grade analytics solutions on Hadoop and to visualize your data using tools such as Apache Superset. The book also covers techniques for deploying your Big Data solutions on the cloud with Apache Ambari, as well as expert techniques for managing and administering your Hadoop cluster. By the end of this book, you will have all the knowledge you need to build expert Big Data systems.

Enterprise Data Architecture Principles

Traditionally, enterprises have embraced data warehouses to store, process, and access large volumes of data. These warehouses are typically large RDBMS databases capable of handling a wide variety of datasets at very large scale. As data complexity, volume, and access patterns have increased, many enterprises have started adopting big data as a model to redesign their data organization and define the necessary policies around it.

The following figure depicts how a typical data warehouse looks in an enterprise:

Because enterprises span many different departments, organizations, and geographies, each one tends to own a warehouse of its own, which presents a variety of challenges to the enterprise as a whole. For example:

  • Multiple sources and destinations of data
  • Data duplication and redundancy
  • Data access regulatory issues
  • Non-standard data definitions across the enterprise
  • Software and hardware scalability and reliability issues
  • Data movement and auditing
  • Integration between various warehouses

It has become much easier and cheaper to build very-large-scale systems than it was a few decades ago, owing to several technological advancements, such as:

  • Falling cost per terabyte of storage
  • Increasing computation power per chip
  • Network bandwidth measured in gigabits per second
  • Cloud computing

With globalization, markets and consumers alike have gone global, increasing an enterprise's reach manifold. These advancements also pose several challenges to enterprises in terms of:

  • Human capital management
  • Warehouse management
  • Logistics management
  • Data privacy and security
  • Sales and billing management
  • Understanding demand and supply

In order to stay on top of market demands, enterprises have started collecting more and more metrics about themselves; as a result, the number of dimensions along which data is captured and analyzed keeps increasing.

In this chapter, we will learn about:

  • Data architecture principles
  • The importance of metadata
  • Data governance
  • Data security
  • Data as a Service
  • Data architecture evolution with Hadoop