Data Lake Development with Big Data
Overview of this book

A Data Lake is a highly scalable platform for storing huge volumes of multistructured data from disparate sources with centralized data management services. This book explores the potential of Data Lakes and the architectural approaches to building them so that they can ingest, index, manage, and analyze massive amounts of data using batch and real-time processing frameworks. It shows you how to build a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications. Using best practices, it guides you through developing a Data Lake's capabilities, focusing on architecting data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a solid understanding of how to build a Data Lake for Big Data.

Summary


This chapter has emphasized the need for a Data Lake implementation in an enterprise context and has provided practical guidance on when to opt for a Data Lake. We can now appreciate the significant benefits a Data Lake offers over a traditional Data Warehouse implementation.

In the sections that followed, we introduced you to the Data Lake concept and its essentials. We then covered the various layers of a Data Lake, provided an overview of its architectural components, and described how crucial each of these components is to building a successful Data Lake.

In the next chapter, you will learn about the Data Intake component and the functionalities it enables. The chapter provides architectural guidance and delves deeper into the various Big Data tools and technologies that can be used to build this component.
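As a small preview of what such an intake flow can look like, here is a minimal, illustrative PySpark sketch that lands a batch of source files into a raw zone on HDFS. The paths, dataset name, and zone layout are hypothetical and are shown only to make the idea concrete; they are not prescribed by the book.

```python
# Minimal batch-intake sketch (illustrative only): read raw CSV files from a
# hypothetical landing directory and persist them, partitioned by load date,
# into a hypothetical "raw" zone of a Hadoop-managed Data Lake.
from datetime import date

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("datalake-batch-intake-sketch")  # hypothetical application name
    .getOrCreate()
)

# Hypothetical source and target locations on HDFS.
landing_path = "hdfs:///landing/sales/*.csv"
raw_zone_path = "hdfs:///datalake/raw/sales"

# Read the incoming batch, keeping the schema loose (all columns as strings)
# so that malformed records do not block the load; schema enforcement can
# happen in a later refinement layer.
incoming = spark.read.option("header", "true").csv(landing_path)

# Tag each record with the load date for lineage and partitioning.
load_date = date.today().isoformat()
incoming = incoming.withColumn("load_date", F.lit(load_date))

# Append the batch into the raw zone as Parquet, partitioned by load date.
(
    incoming.write
    .mode("append")
    .partitionBy("load_date")
    .parquet(raw_zone_path)
)

spark.stop()
```

In practice, dedicated ingestion frameworks usually take the place of hand-rolled scripts like this one; the next chapter's discussion of Big Data tools and technologies covers those options.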