Azure Data Engineering Cookbook - Second Edition

By: Nagaraj Venkatesan, Ahmad Osama

Overview of this book

The famous quote 'Data is the new oil' rings truer every day, as the key to most organizations' long-term success lies in extracting insights from raw data. One of the major challenges organizations face in extracting value from data is building performant data engineering pipelines for data ingestion, storage, processing, and visualization. This second edition of the immensely successful book by Ahmad Osama brings you several recent enhancements in Azure data engineering and shares approximately 80 useful recipes covering common scenarios in building data engineering pipelines in Microsoft Azure. You’ll explore recipes from Azure Synapse Analytics workspaces Gen 2 and get to grips with Synapse Spark pools, serverless SQL pools, Synapse integration pipelines, and Synapse data flows. You’ll also learn Synapse SQL pool optimization techniques in this second edition. Beyond the Synapse enhancements, you’ll discover helpful tips on managing Azure SQL Database and learn about security, high availability, and performance monitoring. Finally, the book takes you through overall data engineering pipeline management, focusing on monitoring with Log Analytics and tracking data lineage with Azure Purview. By the end of this book, you’ll be able to build superior data engineering pipelines and will have an invaluable go-to guide.
Table of Contents (16 chapters)

Optimizing Delta tables in a Synapse Spark pool lake database

As covered in the Processing data using Spark pools and lake databases recipe of Chapter 8, Processing Data Using Azure Synapse Analytics, a lake database allows you to store processed data in Delta tables, which are backed by Parquet files. Delta tables are well suited to storing processed data that can be consumed by reporting solutions such as Power BI.

To achieve optimal performance with Delta tables, it is essential to distribute the data evenly among the Parquet files and purge the unwanted ones. The OPTIMIZE command compacts and redistributes the data among Parquet files, while the VACUUM command purges redundant Parquet files from the Azure Data Lake filesystem. Both commands need to be executed regularly on the lake database to keep queries against Delta tables performing well.
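As a minimal sketch of this maintenance pattern, the helper below builds the Delta Lake OPTIMIZE and VACUUM SQL statements for a list of tables; in a Synapse Spark notebook you would pass each statement to `spark.sql()`. The database and table names here are hypothetical, and the 168-hour retention is Delta Lake's default safety threshold for VACUUM.

```python
def delta_maintenance_statements(database, tables, retain_hours=168):
    """Build OPTIMIZE and VACUUM statements for each Delta table.

    Delta Lake's default VACUUM retention is 168 hours (7 days);
    shorter values require disabling a safety check, so we keep
    the default unless told otherwise.
    """
    statements = []
    for table in tables:
        fq_name = f"{database}.{table}"          # fully qualified table name
        statements.append(f"OPTIMIZE {fq_name}")  # compact small Parquet files
        statements.append(f"VACUUM {fq_name} RETAIN {retain_hours} HOURS")
    return statements


# Hypothetical lake database and tables, for illustration only.
for stmt in delta_maintenance_statements("salesdb", ["orders", "customers"]):
    print(stmt)
    # In a Synapse Spark pool you would run: spark.sql(stmt)
```

Generating the statements separately from executing them makes the script easy to dry-run and log before it touches the lake database.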

In this recipe, we will be writing a script that can scan all Delta tables, optimize...