Data Lake Development with Big Data

Overview of this book

A Data Lake is a highly scalable platform for storing huge volumes of multi-structured data from disparate sources, with centralized data management services. This book explores the potential of Data Lakes and the architectural approaches to building them, so that they can ingest, index, manage, and analyze massive amounts of data using batch and real-time processing frameworks. It shows you how to build a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications. Using best practices, it guides readers in developing the Data Lake's capabilities, focusing on how to architect data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a good understanding of how to build a Data Lake for Big Data.

Understanding Data Integration


In this section, let us dive deeper into the underlying concepts of Data Integration.

Introduction to Data Integration

In the previous chapter, we saw that the data in the Intake Tier is in its native format, with no operations performed on it to check its validity.

To make real use of this newly acquired data in its native format, it has to be combined or integrated with the historical data assets residing within the Enterprise Data Centre; this integration improves the chances of deriving meaningful analytical insights, as the sketch below illustrates.
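
The following minimal Python sketch illustrates this idea: newly ingested records, still in their native JSON format, are enriched with attributes from a historical reference asset. The field names (customer_id, amount, region, segment) and the in-memory historical table are purely hypothetical stand-ins for real Data Lake assets.

```python
# A minimal sketch of combining newly acquired native-format data with
# a historical data asset. All names and values here are hypothetical.
import json

# Newly acquired data, still in its native (JSON) format from the Intake Tier
raw_events = [
    '{"customer_id": 101, "amount": 250.0}',
    '{"customer_id": 102, "amount": 75.5}',
]

# Historical data asset, e.g., a customer master residing in the
# Enterprise Data Centre
historical_customers = {
    101: {"region": "EMEA", "segment": "Retail"},
    102: {"region": "APAC", "segment": "Corporate"},
}

def integrate(raw_records, reference):
    """Enrich each raw record with matching historical attributes."""
    for line in raw_records:
        record = json.loads(line)
        # Combine the native-format record with the historical asset
        record.update(reference.get(record["customer_id"], {}))
        yield record

for enriched in integrate(raw_events, historical_customers):
    print(enriched)
```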

The practical goal of Data Integration is to provide a unified view, through a single access point, of all the data residing in or accessible by the Data Lake. Without this capability, the multiple access points to the data would lead to chaos; organizations could not integrate data from multiple sources, enrich it, and deliver it rapidly to data consumers to maximize their competitive advantage.
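
As an illustration of such a single access point, the following Python sketch registers disparate sources behind one catalog-style interface, so consumers read every dataset through the same API. The class name, dataset names, and query interface are hypothetical; a real Data Lake would route such calls to HDFS, databases, or streaming stores.

```python
# A minimal sketch of a unified view over disparate sources.
# Names and datasets are hypothetical placeholders.
class DataLakeCatalog:
    """Single access point that hides where each dataset actually lives."""

    def __init__(self):
        self._sources = {}

    def register(self, dataset_name, fetch_fn):
        # Each underlying source contributes datasets under a common namespace
        self._sources[dataset_name] = fetch_fn

    def read(self, dataset_name):
        # Consumers use one API regardless of the backing system
        return self._sources[dataset_name]()

catalog = DataLakeCatalog()
catalog.register("sales.raw", lambda: [{"order": 1, "amount": 250.0}])
catalog.register("crm.customers", lambda: [{"customer_id": 101}])

print(catalog.read("sales.raw"))
print(catalog.read("crm.customers"))
```

The design point is that data consumers depend only on the catalog's interface, not on the individual access points, which is what keeps the unified view manageable as sources are added.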

Data Lake's Integration...