
Azure Data Engineer Associate Certification Guide

By: Newton Alex

Overview of this book

Azure is one of the leading cloud providers in the world, offering numerous services for data hosting and data processing. Most companies today are either cloud-native or migrating to the cloud faster than ever, which has led to an explosion of data engineering jobs, with aspiring and experienced data engineers trying to outshine each other. Gaining the DP-203: Azure Data Engineer Associate certification is a sure-fire way of showing future employers that you have what it takes to become an Azure data engineer. This book will help you prepare for the DP-203 exam in a structured way, covering all the topics specified in the syllabus with detailed explanations and exam tips. The book starts by covering the fundamentals of Azure, and then takes the example of a hypothetical company and walks you through the various stages of building data engineering solutions. Throughout the chapters, you'll learn about the various Azure components involved in building data systems and will explore them using a wide range of real-world use cases. Finally, you'll work on sample questions and answers to familiarize yourself with the pattern of the exam. By the end of this Azure book, you'll have gained the confidence you need to pass the DP-203 exam with ease and land your dream job in data engineering.
Table of Contents (23 chapters)

Part 1: Azure Basics
Part 2: Data Storage
Part 3: Design and Develop Data Processing (25-30%)
Part 4: Design and Implement Data Security (10-15%)
Part 5: Monitor and Optimize Data Storage and Data Processing (10-15%)
Part 6: Practice Exercises

Implementing partitioning

In Chapter 3, Designing a Partition Strategy, we covered the basics of partitioning. In this section, we will learn how to implement the different types of partitioning. We will start with partitioning on Azure data stores and then look into partitioning for analytical workloads.

For storage-based partitioning, the main technique is to partition the data into the correct folder structure. In the previous chapters, we learned how to store data in a date-based folder structure. The recommendation from Azure is to use the following pattern:

{Region}/{SubjectMatter(s)}/{yyyy}/{mm}/{dd}/{hh}/
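Before automating this in a pipeline, it can help to see the pattern expressed in code. The following is a minimal Python sketch that builds a partition path from a region, a subject matter, and a timestamp; the function name and the sample values (`emea`, `sales`) are illustrative assumptions, not part of the book's example:

```python
from datetime import datetime, timezone

def partition_path(region: str, subject: str, ts: datetime) -> str:
    # Build a {Region}/{SubjectMatter}/{yyyy}/{mm}/{dd}/{hh}/ folder path,
    # zero-padding month, day, and hour so paths sort lexicographically.
    return f"{region}/{subject}/{ts:%Y/%m/%d/%H}/"

# Example: an event landing on 7 March 2024 at 09:00 UTC
print(partition_path("emea", "sales", datetime(2024, 3, 7, 9, tzinfo=timezone.utc)))
# → emea/sales/2024/03/07/09/
```

Zero-padded components matter here: `03` sorts before `10` as a string, so date-range listings over the folder hierarchy behave correctly.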

Let's learn how to implement folder creation in an automated manner using ADF.

Using ADF/Synapse pipelines to create data partitions

You can use either ADF or Synapse pipelines, as Synapse pipelines are built on the same ADF technology. In this example, I'm using Synapse pipelines. Let's look at the steps to partition data in an automated fashion:

  1. In your Synapse...