Azure Databricks Cookbook

By: Phani Raj, Vinod Jaiswal

Overview of this book

Azure Databricks is a unified collaborative platform for performing scalable analytics in an interactive environment. The Azure Databricks Cookbook provides recipes to get hands-on with the analytics process, including ingesting data from various batch and streaming sources and building a modern data warehouse. The book starts by teaching you how to create an Azure Databricks instance using the Azure portal, the Azure CLI, and ARM templates. You'll work with clusters in Databricks and explore recipes for ingesting data from sources including files, databases, and streaming sources such as Apache Kafka and Event Hubs. The book will help you explore all the features supported by Azure Databricks for building powerful end-to-end data pipelines. You'll also find out how to build a modern data warehouse using Delta tables and Azure Synapse Analytics. Later, you'll learn how to write ad hoc queries and extract meaningful insights from the data lake by creating visualizations and dashboards with Databricks SQL. Finally, you'll deploy and productionize a data pipeline, as well as deploy notebooks and the Azure Databricks service, using continuous integration and continuous delivery (CI/CD). By the end of this Azure book, you'll be able to use Azure Databricks to streamline different processes involved in building data-driven apps.

Simulating a workload for streaming data

In this recipe, you will learn how to simulate vehicle sensor data and send it to Event Hubs. We will run a Python script that generates sensor data for 10 vehicle IDs and pushes the events to Event Hubs for Kafka.
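To make the workload concrete, here is a minimal sketch of such a simulator built on the confluent_kafka producer. The event fields (vehicleId, speed, engineTemp, timestamp) and the placeholder namespace, event hub, and connection-string values are illustrative assumptions, not the exact script used in this recipe:

import json
import random
import time

from confluent_kafka import Producer

# Placeholder values -- replace with your own namespace, event hub, and
# connection string from the previous recipe. These names are illustrative.
BOOTSTRAP_SERVERS = "<your-namespace>.servicebus.windows.net:9093"
CONNECTION_STRING = "<your-event-hubs-connection-string>"
EVENT_HUB = "<your-event-hub-name>"  # the event hub acts as the Kafka topic

producer = Producer({
    "bootstrap.servers": BOOTSTRAP_SERVERS,
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.username": "$ConnectionString",  # literal value required by Event Hubs
    "sasl.password": CONNECTION_STRING,
})

def delivery_report(err, msg):
    # Report failed deliveries; successful sends stay quiet.
    if err is not None:
        print(f"Delivery failed for {msg.key()}: {err}")

for _ in range(100):  # send 100 sample events, one per second
    # One reading from one of 10 simulated vehicles.
    event = {
        "vehicleId": f"vehicle-{random.randint(1, 10)}",
        "speed": random.randint(0, 120),
        "engineTemp": round(random.uniform(70.0, 110.0), 1),
        "timestamp": int(time.time() * 1000),
    }
    producer.produce(EVENT_HUB, key=event["vehicleId"],
                     value=json.dumps(event), callback=delivery_report)
    producer.poll(0)  # serve delivery callbacks
    time.sleep(1)

producer.flush()  # wait for in-flight messages before exiting

Keying each event by vehicleId keeps readings for the same vehicle in order within a partition, which simplifies downstream streaming aggregations.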

Getting ready

Before you start this recipe, make sure you have a recent version of Python installed on the machine from which you will run the Python script. The script was tested on Python 3.8.

  • You need to install the confluent_kafka library by running pip install confluent_kafka from bash, PowerShell, or Command Prompt.
  • You need Azure Event Hubs for Kafka, set up in the previous recipe, Creating required Azure resources for the E2E demonstration. That recipe also shows how to get the Event Hubs connection string, which will be used for the bootstrap.servers notebook variable (see the helper sketch after this list).
  • The Python script...
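A note on the connection string: for the Event Hubs Kafka endpoint, bootstrap.servers is the namespace host on port 9093, while the full connection string is passed as the SASL password (as in the sketch above). Assuming the standard connection-string format, a hypothetical helper to derive one from the other:

# Hypothetical helper, assuming the standard connection-string format:
# Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...
def bootstrap_from_connection_string(conn_str: str) -> str:
    parts = dict(p.split("=", 1) for p in conn_str.split(";") if "=" in p)
    host = parts["Endpoint"].replace("sb://", "").strip("/")
    return host + ":9093"  # Event Hubs exposes its Kafka endpoint on port 9093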