Data Modeling for Azure Data Services

By: Peter ter Braake

Overview of this book

Data is at the heart of all applications and forms the foundation of modern data-driven businesses. With the multitude of data-related use cases and the availability of different data services, choosing the right service and the right design becomes paramount to a successful implementation.

Data Modeling for Azure Data Services starts with an introduction to databases, entity analysis, and normalizing data. The book then shows you how to design a NoSQL database for optimal performance and scalability, and covers how to provision and implement Azure SQL DB, Azure Cosmos DB, and Azure Synapse SQL Pool. As you progress through the chapters, you'll learn about data analytics, Azure Data Lake, and Azure SQL Data Warehouse, and explore dimensional modeling and data vault modeling, along with designing and implementing a data lake using Azure Storage. You'll also learn how to implement ETL with Azure Data Factory.

By the end of this book, you'll have a solid understanding of which Azure data services best fit your model and how to implement the best design for your solution.
Table of Contents (16 chapters)

Section 1 – Operational/OLTP Databases
Section 2 – Analytics with a Data Lake and Data Warehouse
Section 3 – ETL with Azure Data Factory

Creating multiple storage accounts

You can provision as many storage accounts as you like. There are four high-level considerations when it comes to choosing how many storage accounts you need:

  • DTAP
  • Data diversity
  • Cost sensitivity
  • Management overhead

Let's look at each of them in a little more detail.
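
Before doing so, it helps to see what provisioning a single account involves when scripted. The following is a minimal sketch using Python and the azure-mgmt-storage SDK (illustrative only, not code from the book); the subscription ID, resource group, account name, and region are placeholder values, and setting is_hns_enabled enables the hierarchical namespace that makes the account a Data Lake Storage Gen2 account:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

    # Authenticate with whatever credential is available (CLI login, managed identity, ...)
    credential = DefaultAzureCredential()
    client = StorageManagementClient(credential, subscription_id="<subscription-id>")

    # begin_create returns a poller; result() blocks until the account exists.
    # Assumes the resource group "rg-datalake" (a placeholder name) already exists.
    poller = client.storage_accounts.begin_create(
        resource_group_name="rg-datalake",
        account_name="stdatalakedemo",        # 3-24 lowercase letters/digits, globally unique
        parameters=StorageAccountCreateParameters(
            sku=Sku(name="Standard_LRS"),     # locally redundant storage
            kind="StorageV2",                 # general-purpose v2 account
            location="westeurope",            # placeholder region
            is_hns_enabled=True,              # hierarchical namespace = Data Lake Gen2
        ),
    )
    account = poller.result()
    print(f"Provisioned storage account: {account.name}")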

Considering DTAP

DTAP stands for development, testing, acceptance, and production. Each is a separate environment, used at a separate stage of building and working with a data lake. You will probably need to write code that moves data from the source systems into the data lake, and then through the different data lake zones. While that code is a work in progress, it should be kept separate from the business processes that work with data already in the data lake. To save development time, you will most likely want small test datasets for developing the logic and workflow, as sketched below.
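
As a sketch of how that separation might look when provisioning is scripted, the snippet below reuses the client and models from the earlier example and creates one storage account per DTAP environment. The three-letter suffixes and the naming convention are assumptions for illustration, not something the book prescribes:

    # One storage account (and resource group) per DTAP environment.
    # Reuses `client`, StorageAccountCreateParameters, and Sku from the earlier sketch,
    # and assumes the per-environment resource groups already exist.
    environments = ["dev", "tst", "acc", "prd"]  # hypothetical environment suffixes

    for env in environments:
        poller = client.storage_accounts.begin_create(
            resource_group_name=f"rg-datalake-{env}",  # one resource group per environment
            account_name=f"stdatalake{env}",           # hypothetical naming convention
            parameters=StorageAccountCreateParameters(
                sku=Sku(name="Standard_LRS"),
                kind="StorageV2",
                location="westeurope",
                is_hns_enabled=True,                   # Data Lake Gen2 in every environment
            ),
        )
        print(f"Provisioned {poller.result().name}")

Isolating each environment in its own account makes it safe to load small test datasets into development without touching production data.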

After new software is written, it must be tested. You may have separate datasets...