Azure Data Engineering Cookbook

By: Ahmad Osama

Overview of this book

Data engineering is one of the fastest-growing job areas, as data engineers ensure that data is extracted, provisioned, and of the highest quality for analysis. This book uses various Azure services to implement and maintain the infrastructure needed to extract data from multiple sources, and then transform and load it for data analysis. It takes you through different techniques for performing big data engineering using Microsoft Azure data services. It begins by showing you how Azure Blob storage can be used for storing large amounts of unstructured data and how to use it for orchestrating a data workflow. You'll then work with different Cosmos DB APIs and Azure SQL Database. Moving on, you'll discover how to provision an Azure Synapse database and find out how to ingest and analyze data in Azure Synapse. As you advance, you'll cover the design and implementation of batch processing solutions using Azure Data Factory, and understand how to manage, maintain, and secure Azure Data Factory pipelines. You'll also design and implement batch processing solutions using Azure Databricks and then manage and secure Azure Databricks clusters and jobs. In the concluding chapters, you'll learn how to process streaming data using Azure Stream Analytics and Data Explorer. By the end of this Azure book, you'll have gained the knowledge you need to orchestrate batch and real-time ETL workflows in Microsoft Azure.

Processing structured streaming data with Azure Databricks

Streaming data refers to a continuous stream of data from one or more sources, such as IoT devices and application logs. Streaming data can either be processed record by record or in batches (for example, over a sliding window), as required. A popular example of stream processing is detecting fraudulent credit card transactions as and when they happen.

In this recipe, we'll use Azure Databricks to process customer orders as and when they happen, and then aggregate the orders and save them to an Azure Synapse SQL pool.
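
The processing side of the recipe boils down to a Spark Structured Streaming query. The following is a minimal PySpark sketch of that flow, assuming the azure-eventhubs-spark connector and the Azure Synapse (sqldw) connector are available on the Databricks cluster; the connection strings, storage paths, table name, and order columns are placeholder assumptions, not the recipe's exact values:

# Minimal PySpark sketch (Databricks notebook); names below are placeholders.
from pyspark.sql.functions import col, from_json, window, sum as sum_
from pyspark.sql.types import StructType, StringType, IntegerType, DoubleType

# Assumed schema of the simulated order events.
order_schema = (StructType()
    .add("orderId", StringType())
    .add("product", StringType())
    .add("quantity", IntegerType())
    .add("unitPrice", DoubleType())
    .add("orderTime", StringType()))

eh_conf = {
    "eventhubs.connectionString":
        sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(
            "<event-hub-connection-string>")
}

# Read the stream of order events from Azure Event Hubs and parse the JSON body.
orders = (spark.readStream
    .format("eventhubs")
    .options(**eh_conf)
    .load()
    .select(from_json(col("body").cast("string"), order_schema).alias("o"))
    .select("o.*"))

# Aggregate the order amount per product over a 1-minute window.
aggregated = (orders
    .withColumn("orderTime", col("orderTime").cast("timestamp"))
    .withWatermark("orderTime", "5 minutes")
    .groupBy(window(col("orderTime"), "1 minute"), col("product"))
    .agg(sum_(col("quantity") * col("unitPrice")).alias("totalAmount")))

# Write the aggregates to a table in the Azure Synapse dedicated SQL pool.
(aggregated.writeStream
    .format("com.databricks.spark.sqldw")
    .option("url", "<synapse-jdbc-connection-string>")
    .option("tempDir", "wasbs://<container>@<storage-account>.blob.core.windows.net/tmp")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "OrderAggregates")
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .outputMode("append")
    .start())

The watermark lets the windowed aggregation be emitted in append mode, and the Synapse connector stages the results in Blob storage (tempDir) before loading them into the SQL pool table.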

We'll simulate streaming data by reading the orders.csv file and sending the data row by row to an Azure Event Hub. We'll then read the events from the Event Hub, process them, and store the aggregated data in an Azure Synapse SQL pool.
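
As a rough illustration of the simulation step, the following Python sketch reads orders.csv and sends each row as a JSON event using the azure-eventhub package; the connection string, event hub name, and CSV layout are assumptions:

# Minimal sender sketch; connection string and event hub name are placeholders.
import csv
import json
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hub-namespace-connection-string>",
    eventhub_name="<event-hub-name>")

with open("orders.csv", newline="") as f:
    reader = csv.DictReader(f)
    for row in reader:
        # Send each order row as a single JSON event to simulate a live stream.
        batch = producer.create_batch()
        batch.add(EventData(json.dumps(row)))
        producer.send_batch(batch)

producer.close()

Sending one event per row keeps the simulation simple; in practice, several rows could be added to each batch to reduce send overhead.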

Getting ready

To get started, follow these steps:

  1. Log in to the Azure portal using your Azure credentials.
  2. You will need an existing Azure...