Data Engineering with Python

By: Paul Crickard

Overview of this book

Data engineering provides the foundation for data science and analytics, and forms an important part of all businesses. This book will help you explore the various tools and methods used in the data engineering process with Python. It will show you how to tackle challenges commonly faced in different aspects of data engineering. You'll start with an introduction to the basics of data engineering, along with the technologies and frameworks required to build data pipelines that work with large datasets. You'll learn how to transform and clean data and perform analytics to get the most out of your data. As you advance, you'll discover how to work with big data of varying complexity and with production databases, and build data pipelines. Using real-world examples, you'll build architectures on which you'll learn how to deploy data pipelines. By the end of this Python book, you'll have gained a clear understanding of data modeling techniques, and will be able to confidently build data engineering pipelines for tracking data, running quality checks, and making necessary changes in production.
Table of Contents (21 chapters)

Section 1: Building Data Pipelines – Extract, Transform, and Load
Section 2: Deploying Data Pipelines in Production
Section 3: Beyond Batch – Building Real-Time Data Pipelines

Deploying a data pipeline in production

In the previous chapter, you learned how to deploy data pipelines to production, so I will not go into great depth here, but will merely provide a review. To put the new data pipeline into production, perform the following steps:

  1. Browse to your production NiFi instance. I have another instance of NiFi running on port 8080 on localhost.
  2. Drag and drop processor groups to the canvas and select Import. Choose the latest version of the processor groups you just built.
  3. Modify the variables on the processor groups to point to the production database. The table names can stay the same. (A sketch for double-checking these values over the NiFi REST API follows this list.)
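
Step 3 is done through the NiFi UI. If you would like to confirm the values afterwards from a script, NiFi also exposes each processor group's variable registry over its REST API. The following is a rough sketch rather than the author's method: the processor group ID, the lack of authentication, and the exact JSON layout of the response are assumptions to verify against your NiFi version's REST documentation.

```python
import requests

# Assumption: the production NiFi instance from step 1, reachable on
# localhost:8080 without authentication. Replace PG_ID with the ID of the
# processor group you imported (copy it from the group's configuration
# dialog in the UI).
NIFI_API = "http://localhost:8080/nifi-api"
PG_ID = "replace-with-your-processor-group-id"

resp = requests.get(f"{NIFI_API}/process-groups/{PG_ID}/variable-registry")
resp.raise_for_status()

# In NiFi 1.x each entry nests its details under a "variable" key; adjust
# the parsing if your version returns a different payload.
for entry in resp.json()["variableRegistry"]["variables"]:
    variable = entry["variable"]
    print(f"{variable['name']} = {variable['value']}")
```

If a value still points at the test database, correct it in the UI before starting the processor group.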

You can then run the data pipeline and watch the data populate the staging and warehouse tables in the production database.
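
To confirm that the run actually landed data, a quick row count against the production database is usually enough. The snippet below is a minimal sketch assuming the PostgreSQL setup used throughout the book; the connection settings and the staging and warehouse table names are placeholders for whatever your processor group variables actually point to.

```python
import psycopg2

# Placeholder connection details; substitute your production PostgreSQL settings.
conn = psycopg2.connect(
    host="localhost",
    dbname="production",
    user="postgres",
    password="postgres",
)

# Hypothetical table names; use the staging and warehouse tables that your
# processor group variables reference.
with conn, conn.cursor() as cur:
    for table in ("staging", "warehouse"):
        cur.execute(f"SELECT count(*) FROM {table};")
        print(f"{table}: {cur.fetchone()[0]} rows")

conn.close()
```

If the counts match what you saw in the test environment, the pipeline is wired to the right tables.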

The data pipeline you just built read files from a data lake, put them into a database table, ran a query to validate the table, and then inserted them into the warehouse. You could have built this data pipeline with a handful...