Using hash keys

Hash keys were introduced in Data Vault 2.0 and play a central role in the design. Both Hubs and Links can (and often will) have a composite business key, which makes the key large and inefficient to store, index, and join on. One advantage of hash keys is that the composite key is replaced by a single, fixed-size hash value, which is more efficient. This is not true for Hubs with small business keys; in that case, a hash key is likely to be less efficient. But there are other arguments. One of them is consistency: we want all tables to have the same structure, so we use a hash key even when that is less efficient.
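
As an illustration, the following T-SQL is a minimal sketch of how such a hash key can be computed, assuming a hypothetical staging table stg.SalesOrderLine whose business key is the combination of OrderNumber and LineNumber; the cleansing rules and the hash algorithm (MD5 here) differ per implementation.

-- Clean the business key columns, concatenate them with a delimiter,
-- and hash the result into a single 16-byte value that can serve as
-- the Hub's primary key.
SELECT
    HASHBYTES(
        'MD5',
        CONCAT_WS('|',
            UPPER(TRIM(OrderNumber)),
            UPPER(TRIM(CAST(LineNumber AS VARCHAR(10))))
        )
    )                   AS OrderLineHashKey,
    OrderNumber,
    LineNumber,
    SYSUTCDATETIME()    AS LoadDate,
    'SalesSystem'       AS RecordSource
FROM stg.SalesOrderLine;

The same expression has to be used in every load that needs this key, so that the hash computed for a Hub row matches the hash computed for its Satellites and Links.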

The most important advantage of using hash keys is the efficiency gain in the load process. To explain this, consider a star schema with surrogate keys. You need to load the dimension tables first, because the surrogate keys are created while the new dimension rows are inserted. Only after all the dimensions have been loaded can you start loading the fact table. The ETL process, which gets fact rows with source keys...
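
To picture that lookup step, here is a minimal sketch of such a fact load; all table and column names (stg.Sales, dbo.DimCustomer, dbo.DimProduct, dbo.FactSales) are hypothetical. Each incoming fact row carries the source (business) keys, which have to be translated into the surrogate keys that were generated during the dimension loads.

-- Translate the source keys on the incoming rows into the surrogate
-- keys that the dimension loads generated earlier.
INSERT INTO dbo.FactSales (CustomerKey, ProductKey, OrderDate, SalesAmount)
SELECT
    dc.CustomerKey,     -- surrogate key looked up in DimCustomer
    dp.ProductKey,      -- surrogate key looked up in DimProduct
    s.OrderDate,
    s.SalesAmount
FROM stg.Sales AS s
JOIN dbo.DimCustomer AS dc ON dc.CustomerSourceKey = s.CustomerId
JOIN dbo.DimProduct  AS dp ON dp.ProductSourceKey  = s.ProductId;

These lookups are the reason the dimension loads must finish before the fact load can start.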