Data Lake Development with Big Data

Overview of this book

A Data Lake is a highly scalable platform for storing huge volumes of multi-structured data from disparate sources, with centralized data management services. This book explores the potential of Data Lakes and the architectural approaches to building them so that they can ingest, index, manage, and analyze massive amounts of data using batch and real-time processing frameworks. It guides you through building a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications. Using best practices, it shows how to develop a Data Lake's capabilities, focusing on architecting data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a good understanding of how to build a Data Lake for Big Data.

The current and future trends


In this section, let us take stock of where the Data Lake stands today and look at how the evolving enterprise landscape could use it to enhance competitiveness.

As I write this book, I can see that the Data Lake is being adopted quickly. Know-how about new features, use cases, and systems that integrate with the Data Lake is being pushed into the public domain by a variety of industries and researchers at regular intervals. These developments will have a tremendous impact on how the Data Lake's architecture evolves over time.

In the current scheme of things, enterprise Data Lake implementations are dominated by Hadoop, which is predominantly used as the technology of choice for storing huge volumes of data and running algorithms in batch mode using the MapReduce paradigm. Hadoop has become a go-to tool for integrating and extracting better insights by combining...
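
To ground the batch-processing model mentioned above, the following is a minimal sketch of a MapReduce word-count job written against Hadoop's standard Java API. It is an illustrative example, not code taken from this book; the class name and the input and output paths passed on the command line are assumptions, and in a Data Lake setting the input path would typically point at a raw landing directory in HDFS.

// Minimal MapReduce sketch: counts word occurrences across files in an HDFS directory.
// Class name and paths are illustrative assumptions, not taken from this book.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every token in the input split.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum the counts emitted for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // combine locally to reduce shuffle volume
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. a raw zone directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Assuming the job is packaged as wordcount.jar (a hypothetical name), it could be submitted with hadoop jar wordcount.jar WordCount <input-dir> <output-dir>, where both directories are HDFS paths chosen by you.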