Data Engineering with Apache Spark, Delta Lake, and Lakehouse

By: Manoj Kukreja
Overview of this book

In the world of ever-changing data and schemas, it is important to build data pipelines that can auto-adjust to changes. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on. Starting with an introduction to data engineering, along with its key concepts and architectures, this book will show you how to use Microsoft Azure Cloud services effectively for data engineering. You'll cover data lake design patterns and the different stages through which the data needs to flow in a typical data lake. Once you've explored the main features of Delta Lake to build data lakes with fast performance and governance in mind, you'll advance to implementing the lambda architecture using Delta Lake. Packed with practical examples and code snippets, this book takes you through real-world examples based on production scenarios faced by the author in his 10 years of experience working with big data. Finally, you'll cover data lake deployment strategies that play an important role in provisioning the cloud resources and deploying the data pipelines in a repeatable and continuous way. By the end of this data engineering book, you'll know how to effectively deal with ever-changing data and create scalable data pipelines to streamline data science, ML, and artificial intelligence (AI) tasks.
Table of Contents (17 chapters)

Section 1: Modern Data Engineering and Tools
Section 2: Data Pipelines and Stages of Data Engineering
Section 3: Data Engineering Challenges and Effective Deployment Strategies

Configuring data destinations

Once the batch and streaming ingestion pipelines have been invoked, they fetch data from the data sources and write the results to the data destination. For the bronze layer, the data destination is Azure Data Lake Storage Gen2:

  1. We will now use the Azure CLI to create an Azure Data Lake Storage Gen2 account. Copy each of the following commands, line by line, paste them into the Cloud Shell window, and press Enter:
    STORAGEACCOUNTNAME="traininglakehouse"
    RESOURCEGROUPNAME="training_rg"
    LOCATION="eastus"
    # --hns true enables the hierarchical namespace that makes this a Data Lake Storage Gen2 account.
    # Tag values containing spaces must be quoted, or az will parse them as separate tags.
    az storage account create --name $STORAGEACCOUNTNAME --resource-group $RESOURCEGROUPNAME --kind StorageV2 --location $LOCATION --hns true --sku Standard_LRS --tags owner="data engineering" project=lakehouse environment=development

    If the preceding commands are successful, you should see output similar to the following (a quick verification check is sketched after these steps):

    Figure 5.19 – Output of the creation of the Data Lake Storage account

  2. You...
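
Because Data Lake Storage Gen2 behavior depends on the hierarchical namespace being enabled, it can be worth confirming that the account was provisioned with --hns before wiring the ingestion pipelines to it. The following check is a minimal sketch, not one of the book's numbered steps; it reuses the variables from step 1, and the JMESPath keys (name, isHnsEnabled, provisioningState) are standard fields in the az storage account show output:

    # Confirm the account exists and has the hierarchical namespace enabled
    az storage account show --name $STORAGEACCOUNTNAME --resource-group $RESOURCEGROUPNAME --query "{name:name, hnsEnabled:isHnsEnabled, state:provisioningState}" --output table

If hnsEnabled comes back as false, the account was created as plain blob storage, and Data Lake Storage Gen2 filesystem operations against it will fail.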