Data Engineering with Apache Spark, Delta Lake, and Lakehouse

By Manoj Kukreja
Overview of this book

In the world of ever-changing data and schemas, it is important to build data pipelines that can auto-adjust to changes. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on. Starting with an introduction to data engineering, along with its key concepts and architectures, this book will show you how to use Microsoft Azure Cloud services effectively for data engineering. You'll cover data lake design patterns and the different stages through which the data needs to flow in a typical data lake. Once you've explored the main features of Delta Lake to build data lakes with fast performance and governance in mind, you'll advance to implementing the lambda architecture using Delta Lake. Packed with practical examples and code snippets, this book takes you through real-world scenarios based on production situations the author has faced in his 10 years of experience working with big data. Finally, you'll cover data lake deployment strategies that play an important role in provisioning the cloud resources and deploying the data pipelines in a repeatable and continuous way. By the end of this data engineering book, you'll know how to effectively deal with ever-changing data and create scalable data pipelines to streamline data science, ML, and artificial intelligence (AI) tasks.
Table of Contents (17 chapters)

Section 1: Modern Data Engineering and Tools
Section 2: Data Pipelines and Stages of Data Engineering
Section 3: Data Engineering Challenges and Effective Deployment Strategies

Building the ingestion pipelines

You might recall from previous sections that we decided to create two ingestion pipelines – batch ingestion and streaming ingestion. Each one of these pipelines will be built using a different set of Azure services. For batch ingestion, we will use Azure Data Factory, and for streaming ingestion, we will use Azure Event Hubs Capture. So, let's get going, as we still have a long way to go.
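Before building the pipelines themselves, it may help to see the streaming side from a producer's point of view. The following is a minimal sketch (not from the book) that publishes sample sales events to an event hub using the azure-eventhub Python SDK; the connection string, hub name, and event fields are hypothetical placeholders. With Capture enabled on the hub, Event Hubs persists these payloads to storage automatically, with no extra ingestion code on our side.

```python
import json

from azure.eventhub import EventHubProducerClient, EventData

# Hypothetical values -- replace with your own namespace and hub.
CONNECTION_STR = "<event-hubs-namespace-connection-string>"
EVENTHUB_NAME = "electroniz-sales-events"  # assumed hub name


def publish_sales_events(events):
    """Send a list of sales event dicts to the event hub as one batch."""
    producer = EventHubProducerClient.from_connection_string(
        CONNECTION_STR, eventhub_name=EVENTHUB_NAME
    )
    with producer:
        batch = producer.create_batch()
        for event in events:
            # Each event is serialized to JSON; once Capture is enabled,
            # Event Hubs persists these payloads to storage (Avro format
            # by default) on a schedule, forming the streaming ingestion.
            batch.add(EventData(json.dumps(event)))
        producer.send_batch(batch)


publish_sales_events([
    {"order_id": 1001, "sku": "TV-55", "amount": 499.99},
    {"order_id": 1002, "sku": "HDMI-2M", "amount": 9.99},
])
```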

Building a batch ingestion pipeline

Before proceeding with the actual creation of the batch pipeline, let me remind you of a key requirement of the Electroniz lakehouse. Previously, Electroniz stated that transactions within the sales database and the online store happen very frequently throughout the day, and that the lakehouse must be kept up to date with newly created data with a maximum delay of 1 hour.

To satisfy this requirement, we will need to structure the pipeline using the watermark approach. Simply put, a watermark is a column in the source table, typically a timestamp or a monotonically increasing key, that records when each row was created or last modified. On each run, the pipeline copies only the rows whose watermark value is greater than the high-water mark recorded by the previous run, and then saves the new high-water mark for the next run.
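The book builds this pipeline in Azure Data Factory; purely to illustrate the control flow of the watermark approach, here is a minimal PySpark sketch. The table name sales_orders, the last_modified column, the JDBC URL, and the /lake/... paths are all assumptions for the example, not names from the Electroniz scenario.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-ingestion-sketch").getOrCreate()

jdbc_url = "jdbc:sqlserver://<sales-db-server>:1433;database=sales"  # assumed

# 1. Read the high-water mark stored by the previous run (assumed Delta path).
watermark_df = spark.read.format("delta").load("/lake/control/watermark")
last_watermark = watermark_df.agg(F.max("watermark_value")).first()[0]

# 2. Pull only the rows modified since the last run.
incremental_df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("query",
            f"SELECT * FROM sales_orders "
            f"WHERE last_modified > '{last_watermark}'")
    .load()
)

# 3. Append the delta to the raw zone of the lake.
incremental_df.write.format("delta").mode("append").save("/lake/raw/sales_orders")

# 4. Record the new high-water mark for the next hourly run.
new_watermark = incremental_df.agg(F.max("last_modified")).first()[0]
if new_watermark is not None:
    spark.createDataFrame(
        [(new_watermark,)], ["watermark_value"]
    ).write.format("delta").mode("overwrite").save("/lake/control/watermark")
```

The Data Factory equivalent of this logic typically uses a Lookup activity to fetch the old watermark, a Copy activity with a parameterized source query, and a final step to persist the new watermark, so each hourly run moves only the delta rather than the full table.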