Building ETL Pipelines with Python

By Brij Kishore Pandey, Emily Ro Schoof
Overview of this book

Modern extract, transform, and load (ETL) pipelines for data engineering have favored Python for its broad range of uses and its large ecosystem of tools, applications, and open source components. With its simplicity and extensive library support, Python has emerged as a leading choice for data processing. In this book, you’ll walk through the end-to-end process of ETL data pipeline development, starting with the fundamentals of data pipelines and setting up a Python development environment for creating them. Once you’ve explored ETL pipeline design principles and the ETL development process, you’ll be equipped to design custom ETL pipelines. Next, you’ll get to grips with the steps of the ETL process: extracting valuable data; transforming it through cleaning and manipulation while ensuring data integrity; and loading the processed data into storage systems. You’ll also review several ETL modules in Python, comparing their pros and cons when building data pipelines, and leverage cloud tools, such as AWS, to create scalable data pipelines. Lastly, you’ll learn about test-driven development for ETL pipelines to ensure safe deployments. By the end of this book, you’ll have worked through several hands-on examples and be able to build robust, scalable, and resilient ETL pipelines in Python.
Table of Contents (22 chapters)

Part 1: Introduction to ETL, Data Pipelines, and Design Principles
Chapter 1: A Primer on Python and the Development Environment
Part 2: Designing ETL Pipelines with Python
Part 3: Creating ETL Pipelines in AWS
Part 4: Automating and Scaling ETL Pipelines

Avoiding SPOFs

A single point of failure (SPOF) in a data pipeline is a component that, if it fails, stops the entire system from working. SPOFs can severely impact the reliability and availability of your pipeline, leading to data processing delays, data loss, and disruptions in downstream analytics. Avoiding SPOFs means building redundancy and fault tolerance into your data pipeline design. Redundancy means having backup resources that take over if the primary resource fails. Fault tolerance means designing the system to continue operating, even in a degraded state, when some part of it fails.

Using the same logger instance as before, let’s add some redundancy to the extract() function of our demo data pipeline. To do this, we create two extract functions, extract_from_source1() and extract_from_source2(). Both retrieve the same data, but the second function runs only if the first one fails:

def extract():
    try:
        # Attempt the primary data source first
        return extract_from_source1()
    except Exception as e:
        logger.warning("Primary source failed (%s); falling back to secondary", e)
        # Redundant path: runs only when the primary extraction fails
        return extract_from_source2()
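Redundancy alone still fails when every source is down. To illustrate the fault-tolerance half of the design, here is a minimal sketch, not the book’s code, that degrades gracefully by serving the last successful extract from a local cache. The cache path, the stub source functions, and the JSON payload are illustrative assumptions:

import json
import logging

logger = logging.getLogger(__name__)
CACHE_PATH = "last_good_extract.json"  # hypothetical location of the last successful extract

def extract_from_source1():
    # Stand-in for the primary extraction described above
    raise ConnectionError("primary source unavailable")

def extract_from_source2():
    # Stand-in for the redundant extraction described above
    raise ConnectionError("secondary source unavailable")

def extract_with_degradation():
    # Redundancy: try each source in priority order
    for source in (extract_from_source1, extract_from_source2):
        try:
            data = source()
            # Cache the good result so a future total outage can still be served
            with open(CACHE_PATH, "w") as f:
                json.dump(data, f)
            return data
        except Exception as e:
            logger.warning("%s failed: %s", source.__name__, e)
    # Fault tolerance: every source failed, so continue in a degraded state
    # by returning stale data from the last successful extract
    logger.error("All sources failed; serving cached data from %s", CACHE_PATH)
    with open(CACHE_PATH) as f:
        return json.load(f)

The trade-off is staleness: downstream consumers receive data that is usable but out of date, which is usually preferable to halting the entire pipeline.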