Data Lake Development with Big Data
Overview of this book

A Data Lake is a highly scalable platform for storing huge volumes of multistructured data from disparate sources, with centralized data management services. This book explores the potential of Data Lakes and the architectural approaches to building them so that they can ingest, index, manage, and analyze massive amounts of data using batch and real-time processing frameworks. It guides you in building a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications. Using best practices, it walks readers through developing a Data Lake's capabilities, focusing on architecting data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a good understanding of building a Data Lake for Big Data.

Data Provisioning and Metadata


The Data Lake provides easy access to data in both its raw and transformed forms; this increases data sharing across the organization, as internal and external data consumers can make use of the data. The process of providing data from the Data Lake to downstream systems is referred to as Data Provisioning; it gives data consumers secure access to the data assets in the Data Lake and allows them to source this data. Data delivery, data access, and data egress are synonyms of Data Provisioning and can be used interchangeably in this context.
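The sketch below illustrates, in simplified form, what such a provisioning step might look like: a downstream consumer requests a dataset, the request is validated against an entitlement registry, and a delivery location is returned. This is a minimal, assumed design; the names ProvisioningRequest, ENTITLEMENTS, provision_dataset, and the HDFS paths are illustrative, not part of any specific Hadoop or Data Lake API.

    # A minimal sketch of a Data Provisioning check, under assumed names.
    from dataclasses import dataclass

    @dataclass
    class ProvisioningRequest:
        consumer_id: str   # downstream system or user requesting the data
        dataset: str       # logical dataset name in the Data Lake
        fmt: str           # requested delivery format, e.g. "parquet"

    # Hypothetical entitlement registry: which consumers may egress which datasets.
    ENTITLEMENTS = {
        "sales_dashboard": {"curated/sales_orders", "curated/customers"},
    }

    def provision_dataset(req: ProvisioningRequest) -> str:
        """Validate access and return the location the consumer may read."""
        allowed = ENTITLEMENTS.get(req.consumer_id, set())
        if req.dataset not in allowed:
            raise PermissionError(
                f"{req.consumer_id} is not entitled to {req.dataset}")
        # A real Data Lake would also log this egress event for auditing.
        return f"hdfs://datalake/provisioning/{req.dataset}/{req.fmt}"

    if __name__ == "__main__":
        path = provision_dataset(
            ProvisioningRequest("sales_dashboard", "curated/sales_orders", "parquet"))
        print(path)  # hdfs://datalake/provisioning/curated/sales_orders/parquet

The design point is that access is mediated centrally: the consumer never addresses raw storage directly, so entitlement checks and audit logging happen at a single choke point.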

The metadata captured as the data moves from ingestion through integration to consumption is used by data consumers to identify the origins of the data and the various transformations that were applied to it, and to understand the data both structurally and semantically. Metadata in the Provisioning Zone is used to identify the data available for consumption, the validity of access, the subscription details of the data, the consumers who...
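To make the kinds of metadata described above concrete, the following is a minimal sketch of a metadata record carrying origin, lineage, structural, and subscription information. The field names and the sample values are assumptions chosen for illustration, not a schema prescribed by the book.

    # A minimal sketch of a dataset metadata record; field names are illustrative.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DatasetMetadata:
        dataset: str                   # logical name in the Provisioning Zone
        origin: str                    # source system the data was ingested from
        transformations: List[str] = field(default_factory=list)  # lineage steps
        schema: dict = field(default_factory=dict)     # structural description
        tags: List[str] = field(default_factory=list)  # semantic tags
        subscribers: List[str] = field(default_factory=list)  # consumers of record

    meta = DatasetMetadata(
        dataset="curated/sales_orders",
        origin="crm.orders",
        transformations=["deduplicate", "standardize currency to USD"],
        schema={"order_id": "string", "amount_usd": "decimal"},
        tags=["sales"],
        subscribers=["sales_dashboard"],
    )

    # A consumer can trace the data's origin and the transformations applied to it.
    print(meta.origin, "->", " -> ".join(meta.transformations))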