Building ETL Pipelines with Python

By: Brij Kishore Pandey, Emily Ro Schoof
Overview of this book

Modern extract, transform, and load (ETL) pipelines for data engineering have favored the Python language for its broad range of uses and its large assortment of tools, applications, and open source components. With its simplicity and extensive library support, Python has emerged as the undisputed choice for data processing. In this book, you’ll walk through the end-to-end process of ETL data pipeline development, starting with an introduction to the fundamentals of data pipelines and establishing a Python development environment to create pipelines. Once you've explored ETL pipeline design principles and the ETL development process, you'll be equipped to design custom ETL pipelines. Next, you'll get to grips with the steps in the ETL process, which involve extracting valuable data; performing transformations through cleaning, manipulation, and ensuring data integrity; and ultimately loading the processed data into storage systems. You’ll also review several ETL modules in Python, comparing their pros and cons when building data pipelines, and leverage cloud tools, such as AWS, to create scalable data pipelines. Lastly, you’ll learn about the concept of test-driven development for ETL pipelines to ensure safe deployments. By the end of this book, you’ll have worked through several hands-on examples, creating high-performance ETL pipelines and developing robust, scalable, and resilient environments using Python.
Table of Contents (22 chapters)

Part 1: Introduction to ETL, Data Pipelines, and Design Principles
Chapter 1: A Primer on Python and the Development Environment
Part 2: Designing ETL Pipelines with Python
Part 3: Creating ETL Pipelines in AWS
Part 4: Automating and Scaling ETL Pipelines

Scaling for big data packages

In this section, we will look at different tools that help with scaling ETL pipelines for big data workloads.

Dask

When local processing capacity runs short, it makes sense to increase your capacity with the help of additional machines. This is the premise behind the parallelization of tasks in Python. Similar to batch processing, partitioning data (that is, splitting it into equal, bite-sized chunks) allows large data sources to be processed in an identical manner, simultaneously, across multiple systems.

Figure 3.5: Concept of partitioning data
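As a rough illustration of this partition-and-process idea using only NumPy and the standard library (the chunk count and the doubling transformation are illustrative assumptions, not from the book):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def transform(chunk):
    # stand-in for a real per-partition transformation
    return chunk * 2

if __name__ == "__main__":
    data = np.arange(1_000_000)            # a "large" data source
    partitions = np.array_split(data, 8)   # equal, bite-sized chunks

    # each partition is processed in the same way, in parallel,
    # across separate worker processes
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(transform, partitions))

    combined = np.concatenate(results)     # reassemble the results
```

Tools such as Dask apply this same pattern, but manage the partitioning, scheduling, and reassembly for you.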

Dask is a Python library that parallelizes processing tasks in a flexible and dynamic way. It can run locally using threads or processes, or scale out to a “cluster” of additional workers, often cloud-hosted machines on standby, that lend a helping hand to heavy processing tasks initiated from your local device. The creators of Dask designed this parallelization...
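As a minimal sketch of how Dask can parallelize a typical ETL-style aggregation (the file path, column names, and worker count here are illustrative assumptions, not examples from the book):

```python
from dask.distributed import Client, LocalCluster
import dask.dataframe as dd

# start a local "cluster" of worker processes; a cloud-backed cluster
# (for example, on AWS) can be swapped in to scale beyond one machine
cluster = LocalCluster(n_workers=4)
client = Client(cluster)

# read the CSV files lazily as a collection of partitions
df = dd.read_csv("data/sales_*.csv", blocksize="64MB")

# transformations are recorded as a task graph; nothing runs yet
daily_totals = df.groupby("order_date")["amount"].sum()

# .compute() executes the graph in parallel across the workers
result = daily_totals.compute()
print(result.head())

client.close()
cluster.close()
```

Note that Dask builds its task graph lazily and only executes it when compute() is called, which is what allows it to distribute the partitions across whatever workers are available.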