Data Engineering with Apache Spark, Delta Lake, and Lakehouse

By: Manoj Kukreja

Overview of this book

In a world of ever-changing data and schemas, it is important to build data pipelines that can automatically adjust to change. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on. Starting with an introduction to data engineering, along with its key concepts and architectures, this book will show you how to use Microsoft Azure cloud services effectively for data engineering. You'll cover data lake design patterns and the different stages through which data needs to flow in a typical data lake. Once you've explored the main features of Delta Lake for building data lakes with fast performance and governance in mind, you'll advance to implementing the lambda architecture using Delta Lake. Packed with practical examples and code snippets, this book takes you through real-world scenarios drawn from the author's 10 years of experience working with big data in production. Finally, you'll cover data lake deployment strategies, which play an important role in provisioning cloud resources and deploying data pipelines in a repeatable, continuous way. By the end of this data engineering book, you'll know how to deal effectively with ever-changing data and create scalable data pipelines that streamline data science, machine learning (ML), and artificial intelligence (AI) tasks.
Table of Contents (17 chapters)

Section 1: Modern Data Engineering and Tools
Section 2: Data Pipelines and Stages of Data Engineering
Section 3: Data Engineering Challenges and Effective Deployment Strategies

The journey of data

Data engineering is the vehicle that makes the journey of data possible, secure, durable, and timely. A data engineer is the driver of this vehicle, safely maneuvering it around the various roadblocks along the way without compromising the safety of its passengers. Waiting at the end of the road are data analysts, data scientists, and business intelligence (BI) engineers who are eager to receive this data and start narrating its story. You can see this reflected in the following diagram:

Figure 1.1 – Data's journey to effective data analysis

Traditionally, the journey of data revolved around the typical extract, transform, load (ETL) process. Unfortunately, the traditional ETL process is no longer enough in the modern era. Because we depend so heavily on data, there is a greater need than ever to streamline its journey by using cutting-edge architectures, frameworks, and tools.
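
To make the term concrete, a traditional ETL job extracts raw data from a source, transforms it, and loads the result into a target store. The following is a minimal, illustrative PySpark sketch of that pattern; the file paths and column names are hypothetical and are not taken from the book:

    # A minimal ETL sketch (illustrative only; paths and columns are assumed)
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("simple-etl").getOrCreate()

    # Extract: read raw CSV files from the landing zone
    raw = spark.read.option("header", "true").csv("/data/raw/sales.csv")

    # Transform: cast types and aggregate to daily totals
    daily_totals = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .groupBy("sale_date")
           .agg(F.sum("amount").alias("total_amount"))
    )

    # Load: write the curated result as Parquet
    daily_totals.write.mode("overwrite").parquet("/data/curated/daily_sales")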

You may also be wondering why the journey of data is even required. Gone are the days when datasets were small, computing power was scarce, and the scope of data analytics was narrow. We now live in a fast-paced world where decisions need to be made at lightning speed using data that changes by the second. Let's look at how the evolution of data analytics has impacted data engineering.