Azure Data Engineer Associate Certification Guide

By: Newton Alex

Overview of this book

Azure is one of the leading cloud providers in the world, offering numerous services for data hosting and data processing. Most companies today are either cloud-native or migrating to the cloud faster than ever, which has led to an explosion of data engineering jobs, with aspiring and experienced data engineers competing to stand out. Earning the DP-203: Azure Data Engineer Associate certification is a sure-fire way of showing future employers that you have what it takes to become an Azure data engineer. This book will help you prepare for the DP-203 exam in a structured way, covering all the topics specified in the syllabus with detailed explanations and exam tips. The book starts by covering the fundamentals of Azure, then takes the example of a hypothetical company and walks you through the various stages of building data engineering solutions. Throughout the chapters, you'll learn about the Azure components involved in building data systems and explore them through a wide range of real-world use cases. Finally, you'll work through sample questions and answers to familiarize yourself with the pattern of the exam. By the end of this book, you'll have the confidence you need to pass the DP-203 exam and land your dream job in data engineering.
Table of Contents (23 chapters)

Part 1: Azure Basics
Part 2: Data Storage
Part 3: Design and Develop Data Processing (25-30%)
Part 4: Design and Implement Data Security (10-15%)
Part 5: Monitor and Optimize Data Storage and Data Processing (10-15%)
Part 6: Practice Exercises

Choosing the right file types for analytical queries

In the previous section, we discussed the three file formats—Avro, Parquet, and ORC—in detail. Analytical workloads are read-heavy: queries typically scan a handful of columns across many rows. Column-based formats such as Parquet and ORC store each column's values together, so a query reads only the columns it needs, which makes them the natural fit for analytics.
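
To make this concrete, here is a minimal PySpark sketch; the local session setup and the sales.parquet file are assumptions for illustration only. Because Parquet stores each column contiguously, Spark reads just the two columns this query touches rather than the whole file:

    from pyspark.sql import SparkSession

    # Assumed local session; in an Azure scenario this would run on
    # Synapse Spark or Databricks instead.
    spark = SparkSession.builder.appName("columnar-read-demo").getOrCreate()

    # Hypothetical Parquet dataset with many columns.
    df = spark.read.parquet("sales.parquet")

    # Column pruning: only 'region' and 'revenue' are read from disk.
    df.select("region", "revenue").groupBy("region").sum("revenue").show()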

Based on the five core areas that we compared earlier (read performance, write performance, compression, schema evolution, and the ability to split files for parallel processing) and your choice of processing technology, such as Hive or Spark, you can select either ORC or Parquet.

For example, consider the following (a short PySpark sketch of both options follows the list):

  • If you have Hive- or Presto-based workloads, go with ORC.
  • If you have Spark- or Drill-based workloads, go with Parquet.
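
The following sketch shows both write paths in PySpark; the input file raw_events.csv and the output folder names are hypothetical, used here only for illustration. The same DataFrame writer API covers both formats, so switching between them is a one-line change:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("format-choice-demo").getOrCreate()

    # Hypothetical raw input.
    df = spark.read.csv("raw_events.csv", header=True, inferSchema=True)

    # Spark- or Drill-based pipelines: write Parquet.
    df.write.mode("overwrite").parquet("events_parquet/")

    # Hive- or Presto-based pipelines: write ORC.
    df.write.mode("overwrite").orc("events_orc/")

Both calls produce compressed, splittable columnar files; the choice mainly follows the query engine that will read the data most often.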

Now that you understand the different types of file formats available and the ones to use for analytical workloads, let's move on to the next topic: designing for efficient querying.