Data Modeling for Azure Data Services

By Peter ter Braake

Overview of this book

Data is at the heart of all applications and forms the foundation of modern data-driven businesses. With the multitude of data-related use cases and the availability of different data services, choosing the right service and implementing the right design become paramount to a successful implementation. Data Modeling for Azure Data Services starts with an introduction to databases, entity analysis, and normalizing data. The book then shows you how to design a NoSQL database for optimal performance and scalability, and covers how to provision and implement Azure SQL DB, Azure Cosmos DB, and Azure Synapse SQL Pool. As you progress through the chapters, you'll learn about data analytics, Azure Data Lake, and Azure SQL Data Warehouse, and explore dimensional modeling and data vault modeling, along with designing and implementing a data lake using Azure Storage. You'll also learn how to implement ETL with Azure Data Factory. By the end of this book, you'll have a solid understanding of which Azure data services are the best fit for your model and how to implement the best design for your solution.
Table of Contents (16 chapters)

Section 1 – Operational/OLTP Databases
Section 2 – Analytics with a Data Lake and Data Warehouse
Section 3 – ETL with Azure Data Factory

Choosing the proper file size

Azure Data Lake Storage Gen2, the Spark-based compute engines (Azure Databricks and Synapse Analytics Spark pools), and Data Factory are optimized to perform better on larger files. When queries need to work with a lot of small files, the per-file overhead can quickly become a performance bottleneck.
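To see whether a folder suffers from a small-files problem, you can list the files and their sizes before running any queries. The following is a minimal sketch using the azure-storage-file-datalake Python SDK; the account URL, container, and folder names are hypothetical placeholders, and the 64 MB threshold anticipates the guideline discussed below.

# Inventory file sizes in a data lake folder to spot a small-files problem.
# The account URL, container, and folder are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    "https://mydatalake.dfs.core.windows.net",   # hypothetical account
    credential=DefaultAzureCredential(),
)
file_system = service.get_file_system_client("raw")  # hypothetical container

sizes_mb = [
    path.content_length / (1024 * 1024)
    for path in file_system.get_paths(path="sales/2021", recursive=True)
    if not path.is_directory
]

if sizes_mb:
    small = sum(1 for size in sizes_mb if size < 64)
    print(f"{len(sizes_mb)} files, {small} below 64 MB, "
          f"average {sum(sizes_mb) / len(sizes_mb):.1f} MB")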

Apart from performance considerations, reading a single 16 MB file is also cheaper than reading four 4 MB files: reading the first block of a file incurs more cost than reading subsequent blocks, so every additional file adds to the bill.
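The effect is easy to see with made-up numbers. The cost units in this sketch are hypothetical and only illustrate the shape of the comparison, not actual Azure rates.

# Hypothetical cost units: reading the first 4 MB block of a file costs more
# than reading each subsequent block. The numbers are illustrative only.
c_first, c_next = 1.3, 1.0

one_16mb_file = c_first + 3 * c_next   # one first block plus three follow-up blocks
four_4mb_files = 4 * c_first           # every file pays the first-block premium

print(one_16mb_file, four_4mb_files)   # 4.3 versus 5.2 cost units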

Optimal file sizes are between 64 MB and 1 GB. This can be a challenge in the RAW zone, where you may not have much control over file sizes.
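A common way to get into that range downstream of the RAW zone is to compact many small files into fewer, larger ones as data moves to the next zone. The sketch below uses PySpark, as you would run it in Databricks or a Synapse Spark pool; the paths, container names, and partition count are assumptions to test with, not prescriptions.

# Compact small raw files into a handful of larger Parquet files. The abfss
# paths are hypothetical, and storage authentication is assumed to be
# configured on the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw_path = "abfss://raw@mydatalake.dfs.core.windows.net/sales/2021"
cleansed_path = "abfss://cleansed@mydatalake.dfs.core.windows.net/sales/2021"

df = spark.read.parquet(raw_path)

# The number of output files equals the number of partitions at write time.
# Eight is only a starting point; pick a count that puts each file in the
# 64 MB to 1 GB range for your data volume.
df.repartition(8).write.mode("overwrite").parquet(cleansed_path)

When you only need to reduce the number of partitions, coalesce() is usually cheaper than repartition(), at the price of less evenly sized files; testing both, as noted next, is the practical way to choose.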

Thorough testing to find the optimal number and size of files for the compute engines you use is key here.

Now that you have learned the basics of setting up a data lake, let's start implementing one by provisioning an Azure storage account.