
Azure Data Engineering Cookbook - Second Edition

By : Nagaraj Venkatesan, Ahmad Osama

Overview of this book

The famous quote 'Data is the new oil' rings truer every day, as the key to most organizations' long-term success lies in extracting insights from raw data. One of the major challenges organizations face in extracting value from data is building performant data engineering pipelines for data ingestion, storage, processing, and visualization. This second edition of the immensely successful book by Ahmad Osama brings you several recent enhancements in Azure data engineering and shares approximately 80 useful recipes covering common scenarios in building data engineering pipelines in Microsoft Azure. You'll explore recipes from Azure Synapse Analytics workspaces Gen 2 and get to grips with Synapse Spark pools, SQL serverless pools, Synapse integration pipelines, and Synapse data flows. This second edition also covers Synapse SQL pool optimization techniques. Beyond the Synapse enhancements, you'll discover helpful tips on managing Azure SQL Database and learn about security, high availability, and performance monitoring. Finally, the book takes you through overall data engineering pipeline management, focusing on monitoring with Log Analytics and tracking data lineage with Azure Purview. By the end of this book, you'll be able to build superior data engineering pipelines and have an invaluable go-to guide.

Scheduling notebooks to process data incrementally

Consider the following scenario: data is loaded into the data lake daily as CSV files. The task is to create a scheduled batch job that processes each day's files, performs basic checks, and loads the data into a Delta table in the lake database. This recipe addresses the scenario through the following tasks:

  1. Reading only the new CSV files loaded to the data lake each day, using Spark pools and notebooks
  2. Processing the data and performing upserts (updating a row if it exists, inserting it if it doesn't) into the Delta table using notebooks
  3. Scheduling the notebook to operationalize the solution
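Before walking through the Synapse-specific steps, the core logic of tasks 1 and 2 can be sketched in plain Python. This is only an illustration of the incremental-load and upsert semantics, not the recipe's actual code: in the recipe itself the reads are done with Spark pools and the upsert with a Delta table `MERGE`. The key column `TransID`, the file dates, and the checkpoint value below are illustrative assumptions, not taken from the book.

```python
# Plain-Python sketch of the recipe's core logic: skip files already
# processed (tracked via a checkpoint), then upsert rows into the target
# table by key. In Synapse this maps to a Spark read filtered to new files
# and a Delta MERGE; the names here are illustrative assumptions.

def incremental_upsert(target, files, last_processed):
    """Apply rows from files newer than last_processed into target.

    target:         dict mapping key -> row dict (stands in for the Delta table)
    files:          list of (load_date, rows) tuples (stands in for daily CSVs)
    last_processed: checkpoint; only files with load_date > this are read
    Returns the new checkpoint to persist for the next scheduled run.
    """
    new_checkpoint = last_processed
    for load_date, rows in files:
        if load_date <= last_processed:   # task 1: read only new files
            continue
        for row in rows:
            # task 2: upsert - update if the key exists, insert if it doesn't
            target[row["TransID"]] = row
        new_checkpoint = max(new_checkpoint, load_date)
    return new_checkpoint

# Example: one already-processed file (skipped) and one new file that
# updates an existing row and inserts a new one.
table = {1: {"TransID": 1, "amount": 10}}
files = [
    ("2023-01-01", [{"TransID": 1, "amount": 10}]),   # already processed
    ("2023-01-02", [{"TransID": 1, "amount": 99},     # update existing row
                    {"TransID": 2, "amount": 20}]),   # insert new row
]
checkpoint = incremental_upsert(table, files, "2023-01-01")
```

Persisting the returned checkpoint between runs is what makes the scheduled job (task 3) idempotent: rerunning it with an unchanged checkpoint processes nothing new.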

Getting ready

Create a Synapse Analytics workspace, as explained in the Provisioning an Azure Synapse Analytics workspace recipe in this chapter.

Create a Spark pool, as explained in the Provisioning and configuring Spark pools recipe in this chapter.

Download the TransDtls...