Data Lake Development with Big Data
Overview of this book

A Data Lake is a highly scalable platform for storing huge volumes of multi-structured data from disparate sources, with centralized data management services. This book explores the potential of Data Lakes and the architectural approaches to building them so that they can ingest, index, manage, and analyze massive amounts of data using batch and real-time processing frameworks. It guides you through building a Data Lake that is managed by Hadoop and accessed as required by other Big Data applications. Using best practices, it shows you how to develop a Data Lake's capabilities, focusing on architecting data governance, security, data quality, data lineage tracking, metadata management, and semantic data tagging. By the end of this book, you will have a good understanding of how to build a Data Lake for Big Data.

Architectural guidance


This section attempts to answer one key question: which tool should be used for which use case, and how do we decide on the best option?

As the previous sections showed, there is a plethora of options for establishing connectivity to External Data Sources and ingesting data into the Data Lake's Intake Tier.

Choosing an ingestion tool depends primarily on the use case you are trying to implement with the Data Lake. Many Data Lake implementations end up using multiple tools together to acquire and process data; the sketch below illustrates the kind of work one such tool performs. The market is also flooded with so many tools that decision making becomes very difficult.
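To make the discussion concrete, here is a minimal sketch of the batch-acquisition pattern that ingestion tools automate: pulling rows from a source system and landing them in the Intake Tier as timestamped files. This is an illustrative assumption, not a reference implementation from the book; the source database, table name, and landing path are hypothetical, and in a real deployment this role would be filled by a dedicated tool chosen using the criteria discussed in this section.

import json
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical source system and Intake Tier landing path; in practice a
# dedicated ingestion tool (batch importer or streaming collector) would
# own this work, with scheduling, retries, and schema handling built in.
SOURCE_DB = "operational.db"
INTAKE_DIR = Path("/data/lake/intake/orders")

def ingest_batch():
    """Pull rows from the source and land them as newline-delimited JSON."""
    INTAKE_DIR.mkdir(parents=True, exist_ok=True)

    conn = sqlite3.connect(SOURCE_DB)
    conn.row_factory = sqlite3.Row  # rows become name-addressable
    rows = conn.execute("SELECT * FROM orders").fetchall()
    conn.close()

    # Stamp each landed file with the ingestion time so downstream
    # processing can identify and pick up each batch independently.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out_file = INTAKE_DIR / f"orders_{stamp}.jsonl"
    with out_file.open("w") as f:
        for row in rows:
            f.write(json.dumps(dict(row)) + "\n")

if __name__ == "__main__":
    ingest_batch()

Even this toy version surfaces the decisions a real tool must make for you: incremental versus full pulls, file layout in the landing zone, and failure handling, which is why tool selection matters.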

In this section, we provide a crisp overview of the factors to consider when selecting a tool. This is by no means exhaustive coverage, but it should give you enough depth to extrapolate from this knowledge and make better decisions.

The choice of the tool invariably starts...