Azure Databricks Cookbook

By: Phani Raj, Vinod Jaiswal

Overview of this book

Azure Databricks is a unified collaborative platform for performing scalable analytics in an interactive environment. The Azure Databricks Cookbook provides recipes to get hands-on with the analytics process, including ingesting data from various batch and streaming sources and building a modern data warehouse. The book starts by teaching you how to create an Azure Databricks instance within the Azure portal, Azure CLI, and ARM templates. You’ll work through clusters in Databricks and explore recipes for ingesting data from sources, including files, databases, and streaming sources such as Apache Kafka and EventHub. The book will help you explore all the features supported by Azure Databricks for building powerful end-to-end data pipelines. You'll also find out how to build a modern data warehouse by using Delta tables and Azure Synapse Analytics. Later, you’ll learn how to write ad hoc queries and extract meaningful insights from the data lake by creating visualizations and dashboards with Databricks SQL. Finally, you'll deploy and productionize a data pipeline as well as deploy notebooks and Azure Databricks service using continuous integration and continuous delivery (CI/CD). By the end of this Azure book, you'll be able to use Azure Databricks to streamline different processes involved in building data-driven apps.

Constraints in Delta tables

Delta tables support constraints to ensure data integrity and quality. When incoming data does not satisfy a constraint, an InvariantViolationException is thrown and the data is not added to the table.

Delta tables support the following types of constraints:

  • CHECK: Evaluates a given Boolean expression for each input row.
  • NOT NULL: Checks that the constrained column does not contain null values.
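As a quick sketch of how the two constraint types are applied, assuming a Delta table named customer with columns c_custkey and c_name (hypothetical names used for illustration only), the syntax looks like this:

```sql
-- Enforce that c_name can never be null
-- (existing rows must already satisfy the constraint, or the statement fails)
ALTER TABLE customer ALTER COLUMN c_name SET NOT NULL;

-- Add a named CHECK constraint; the Boolean expression
-- is evaluated for every row written to the table
ALTER TABLE customer ADD CONSTRAINT valid_custkey CHECK (c_custkey > 0);

-- Constraints can be removed again if needed
ALTER TABLE customer ALTER COLUMN c_name DROP NOT NULL;
ALTER TABLE customer DROP CONSTRAINT valid_custkey;
```

Once a constraint is in place, any write containing a row that violates it fails the whole transaction, which is what raises the InvariantViolationException mentioned above.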

Getting ready

Before starting, make sure you have a valid Azure subscription with Contributor access, an Azure Databricks workspace, and an ADLS Gen2 storage account.

You can follow along by running the steps in the 6_6.Constraints notebook, available at https://github.com/PacktPublishing/Azure-Databricks-Cookbook/blob/main/Chapter06/6_6.Constraints.ipynb. Upload the CustomerwithNullCName.csv file to the Customer folder in the rawdata container of your mounted ADLS Gen2 account.
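With the file uploaded, a minimal sketch of loading it into a constrained Delta table might look like the following (the mount point /mnt/rawdata and the table and column names here are assumptions for illustration, not the notebook's exact code):

```sql
-- Create a Delta table whose C_NAME column carries a NOT NULL constraint
-- (assumed two-column schema for illustration)
CREATE TABLE IF NOT EXISTS customer (
  C_CUSTKEY INT,
  C_NAME    STRING NOT NULL
) USING DELTA;

-- Loading the CSV fails with an InvariantViolationException
-- because one of its rows has a null C_NAME
COPY INTO customer
FROM '/mnt/rawdata/Customer/CustomerwithNullCName.csv'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header' = 'true');
```

Because the constraint is checked at write time, none of the four rows are committed when the violating row is encountered.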

The CustomerwithNullCName.csv file has 4 rows, and one row has C_NAME as...