Data Lake Development with Big Data
Overview of this book

A Data Lake is a highly scalable platform for storing huge volumes of multistructured data from disparate sources, with centralized data management services. This book explores the potential of Data Lakes and the architectural approaches to building them, so that they can ingest, index, manage, and analyze massive amounts of data using batch and real-time processing frameworks. It guides you in building a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications. Using best practices, it walks you through developing a Data Lake's capabilities, focusing on architecting data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a good understanding of how to build a Data Lake for Big Data.
Table of Contents (13 chapters)

Summary


This chapter explained the Data Intake tier in detail. We started by understanding the various zones in the Intake tier and the external sources from which data can be acquired, depending on your use case. We then took a deep dive into the functionality of the Source System Zone, the Transient Landing Zone, and the Raw Zone, and reviewed the best practices to consider while architecting the Data Intake tier.
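The zone flow recapped above can be illustrated with a minimal sketch. This is not code from the chapter: the directory names, the checksum tagging, and the function names (`land`, `promote_to_raw`) are all illustrative assumptions, and a real Data Lake would use HDFS paths and a proper ingestion framework rather than local directories.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical local stand-ins for the Transient Landing Zone and the
# Raw Zone; an actual implementation would target HDFS locations.
TRANSIENT = Path("lake/transient_landing")
RAW = Path("lake/raw")


def land(source: Path) -> Path:
    """Copy an incoming source file into the Transient Landing Zone."""
    TRANSIENT.mkdir(parents=True, exist_ok=True)
    target = TRANSIENT / source.name
    shutil.copy2(source, target)
    return target


def promote_to_raw(landed: Path) -> Path:
    """Move a file from the landing zone into the Raw Zone, prefixing
    it with a content checksum so later lineage lookups can trace it."""
    digest = hashlib.sha256(landed.read_bytes()).hexdigest()[:12]
    RAW.mkdir(parents=True, exist_ok=True)
    target = RAW / f"{digest}_{landed.name}"
    shutil.move(str(landed), target)
    return target
```

A file would first be landed, optionally validated in the landing zone, and only then promoted, keeping unverified data out of the Raw Zone.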

In the subsequent sections, we looked at the various Big Data tools and technologies that can be used to acquire different types of data from various sources. The architectural guidance section helped you decide on the set of technologies best suited to specific use cases.

In the next chapter, you will learn about the Data Integration, Quality, and Enrichment zones; it takes you through the key functionality of these zones and provides architectural guidance on how to implement them.