Data Lake Development with Big Data
Overview of this book

A Data Lake is a highly scalable platform for storing huge volumes of multi-structured data from disparate sources, with centralized data management services. This book explores the potential of Data Lakes and the architectural approaches to building them so that they can ingest, index, manage, and analyze massive amounts of data using batch and real-time processing frameworks. It guides you through building a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications. Following best practices, it shows how to develop the Data Lake's capabilities, focusing on architecting data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a good understanding of how to build a Data Lake for Big Data.
Table of Contents (13 chapters)

Understanding Intake Tier zones


Enterprises sit on vast reserves of diverse, potentially invaluable data, such as databases, social media, logs, and sensor data, locked away in data silos. The Data Lake is schema-less and stores data of any type and format. Beyond storage, it offers the ability to integrate data from disparate sources and to ingest high-velocity, multi-structured, massive datasets. This key capability empowers enterprises to perform exploratory and advanced analysis on all of their data and quickly gain actionable insights.

Before the Data Lake can be utilized to the hilt for data analysis, mechanisms must be in place to seamlessly connect it to various external data sources and acquire data from them. The Intake Tier in the Data Lake architecture implements the functionality needed to address this. In the following subsection, let us study the Intake Tier in detail.
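To make the schema-less, ingest-as-is idea concrete, here is a minimal sketch (not from the book) of an Intake-style landing step. It writes raw records from each source, untouched and without any imposed schema, into a source-partitioned landing directory, alongside a small metadata sidecar. The function name `ingest`, the JSON Lines layout, and the `_metadata.json` sidecar are all illustrative assumptions; a production Data Lake would typically land data in HDFS via tools such as Flume or Sqoop rather than the local filesystem.

```python
import json
import time
from pathlib import Path
from tempfile import mkdtemp

def ingest(records, source_name, landing_dir):
    """Land raw records as-is (no schema enforcement) in a per-source
    directory, and record basic intake metadata in a sidecar file.
    Hypothetical helper for illustration only."""
    target = Path(landing_dir) / source_name
    target.mkdir(parents=True, exist_ok=True)

    # Records are written untouched, one JSON document per line;
    # structure varies freely between sources (schema-on-read).
    with (target / "part-0000.jsonl").open("w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

    # Minimal intake metadata: which source, when, how many records.
    meta = {
        "source": source_name,
        "ingested_at": time.time(),
        "record_count": len(records),
    }
    (target / "_metadata.json").write_text(json.dumps(meta))
    return meta

# Two disparate sources land side by side with different shapes.
landing = mkdtemp()
m1 = ingest([{"user": "a", "clicks": 3}], "weblogs", landing)
m2 = ingest([{"sensor": 7, "temp": 21.5},
             {"sensor": 9, "temp": 19.0}], "iot", landing)
```

The point of the sketch is that nothing about the weblog records constrains the sensor records: each source keeps its own structure, and only lightweight metadata is captured at intake time.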

Let us start by understanding the data flow from the External Data Sources to the Intake...