Data Lake Development with Big Data
Overview of this book

A Data Lake is a highly scalable platform for storing huge volumes of multi-structured data from disparate sources with centralized data management services. This book explores the potential of Data Lakes and the architectural approaches to building them so that they can ingest, index, manage, and analyze massive amounts of data using batch and real-time processing frameworks. It guides you in building a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications. Using best practices, it walks you through developing a Data Lake's capabilities, focusing on architecting data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a good understanding of how to build a Data Lake for Big Data.

Understanding the Data Consumption tier


Let us now understand the Data Consumption tier. We will start by looking at how traditional approaches fall short when dealing with Big Data discovery and consumption, and how a Data Lake excels in these areas. The subsequent sections then take you through Data Discovery and Data Provisioning in detail.

Data Consumption – Traditional versus Data Lake

The need for Data Consumption has grown more complex as enterprises are sitting on vast reserves of potentially valuable but undiscovered data. With traditional EDW systems, the approach to finding data from disparate sources has largely been manual, inefficient, and time-consuming. Existing BI tools have tried to address this by adding various data integration features, but they essentially provide visibility into only a minuscule portion of the data. The questions meant to explore the data have to be defined upfront; such models fall apart in the Big Data age, where it is very difficult to ascertain...
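
To make the contrast concrete, the following is a minimal schema-on-read sketch using PySpark, illustrating the kind of ad hoc exploration a Data Lake enables without upfront modeling. The file path, column names, and view name are illustrative assumptions, not taken from the book.

# Minimal schema-on-read sketch (PySpark). The path, columns, and view name
# are illustrative assumptions; adjust them to your own lake layout.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-lake-exploration").getOrCreate()

# Read raw JSON landed in the lake; the schema is inferred at read time,
# so no upfront modeling is required before the data can be explored.
raw_events = spark.read.json("hdfs:///lake/raw/clickstream/*.json")
raw_events.printSchema()

# Register the data as a temporary view and ask an ad hoc question
# that was not defined when the data was ingested.
raw_events.createOrReplaceTempView("raw_events")
top_pages = spark.sql("""
    SELECT page, COUNT(*) AS visits
    FROM raw_events
    GROUP BY page
    ORDER BY visits DESC
    LIMIT 10
""")
top_pages.show()

In a traditional EDW, answering such a question would first require modeling the schema, building the ETL pipeline, and loading the data; here the question can be posed directly against the raw files as it arises.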