Data Engineering with Python

By: Paul Crickard

Overview of this book

Data engineering provides the foundation for data science and analytics, and forms an important part of all businesses. This book will help you to explore various tools and methods that are used for understanding the data engineering process using Python. The book will show you how to tackle challenges commonly faced in different aspects of data engineering. You’ll start with an introduction to the basics of data engineering, along with the technologies and frameworks required to build data pipelines to work with large datasets. You’ll learn how to transform and clean data and perform analytics to get the most out of your data. As you advance, you'll discover how to work with big data of varying complexity and production databases, and build data pipelines. Using real-world examples, you’ll build architectures on which you’ll learn how to deploy data pipelines. By the end of this Python book, you’ll have gained a clear understanding of data modeling techniques, and will be able to confidently build data engineering pipelines for tracking data, running quality checks, and making necessary changes in production.
Table of Contents (21 chapters)

Section 1: Building Data Pipelines – Extract, Transform, and Load
Section 2: Deploying Data Pipelines in Production
Section 3: Beyond Batch – Building Real-Time Data Pipelines

Building idempotent data pipelines

A crucial feature of a production data pipeline is that it is idempotent. In mathematics, idempotent denotes an element of a set that is unchanged in value when multiplied or otherwise operated on by itself.
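The mathematical definition can be illustrated with a familiar function. This is a minimal sketch using Python's built-in `abs`, which is idempotent: applying it a second time changes nothing.

```python
# abs() is idempotent: applying it twice gives the same result as once.
x = -7
once = abs(x)
twice = abs(abs(x))
assert once == twice  # both are 7
```

The same property is what we want from a pipeline run: executing it again on the same input should leave the destination in the same state.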

In data science, this means that when your pipeline fails – which is not a matter of if, but when – it can be rerun and the results are the same. Likewise, if you accidentally click run on your pipeline three times in a row, there are no duplicate records.

In Chapter 3, Reading and Writing Files, you created a data pipeline that generated 1,000 records of people and put that data in an Elasticsearch database. If you let that pipeline run every 5 minutes, you would have 2,000 records after 10 minutes. In this example, the records are all random, so that may be acceptable. But what if the records were rows queried from another system?
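One common way to make such a pipeline idempotent is to derive each document's ID deterministically from the record's content, so a rerun overwrites the same documents instead of inserting duplicates. The following is a minimal sketch of that idea; the `record_id` and `upsert` helpers are illustrative names, and a plain dict stands in for the destination store.

```python
import hashlib
import json

def record_id(record: dict) -> str:
    """Derive a deterministic ID from the record's content."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def upsert(store: dict, records: list) -> None:
    """Write each record keyed by its deterministic ID (an upsert)."""
    for rec in records:
        store[record_id(rec)] = rec

store = {}
people = [
    {"name": "Ada", "city": "Albuquerque"},
    {"name": "Paul", "city": "Santa Fe"},
]

# Running the pipeline step three times leaves exactly two records.
for _ in range(3):
    upsert(store, people)

print(len(store))  # 2
```

With Elasticsearch, the equivalent is to pass this hash as the document `id` when indexing, so repeated runs replace existing documents rather than creating new ones with auto-generated IDs.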

Every time the pipeline runs, it would insert...