Building ETL Pipelines with Python

By Brij Kishore Pandey, Emily Ro Schoof

Overview of this book

Modern extract, transform, and load (ETL) pipelines for data engineering have favored the Python language for its broad range of uses and its large assortment of tools, applications, and open source components. With its simplicity and extensive library support, Python has emerged as the undisputed choice for data processing. In this book, you’ll walk through the end-to-end process of ETL data pipeline development, starting with an introduction to the fundamentals of data pipelines and establishing a Python development environment to create pipelines. Once you've explored ETL pipeline design principles and the ETL development process, you'll be equipped to design custom ETL pipelines. Next, you'll get to grips with the steps in the ETL process: extracting valuable data; performing transformations through cleaning and manipulation while ensuring data integrity; and ultimately loading the processed data into storage systems. You’ll also review several ETL modules in Python, comparing their pros and cons when building data pipelines, and leverage cloud platforms, such as AWS, to create scalable data pipelines. Lastly, you’ll learn about test-driven development for ETL pipelines to ensure safe deployments. By the end of this book, you’ll have worked through several hands-on examples and be able to create high-performance, robust, scalable, and resilient ETL pipelines using Python.
Table of Contents (22 chapters)

Part 1: Introduction to ETL, Data Pipelines, and Design Principles
Chapter 1: A Primer on Python and the Development Environment
Part 2: Designing ETL Pipelines with Python
Part 3: Creating ETL Pipelines in AWS
Part 4: Automating and Scaling ETL Pipelines

Discussion – Building flexible applications in AWS

Before moving on to the AWS applications used to help automate ETL development, let's pause to discuss how to use AWS effectively. S3 and EC2 instances are two core services offered by AWS that are frequently used together to build scalable, flexible applications in the cloud.

Leveraging S3 and EC2

S3 and EC2 can be used in tandem to create a powerful, flexible platform for running applications that scale easily and reliably in the cloud. S3 can serve as a storage backend for EC2 instances, where data is stored and retrieved using the S3 APIs, HTTP, or a CLI. By using S3 as the data source or target for ETL workflows, data can be ingested, processed, and stored in intermediate locations within different S3 buckets (such as staging and archive buckets). EC2 instances can then access the transformed output data from S3 directly over your network, without the need to copy or move data across...
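To make this pattern concrete, here is a minimal sketch of an extract-transform-load round trip between S3 buckets, as it might run on an EC2 instance. It assumes the boto3 library, AWS credentials available to the instance (for example, via an IAM instance role), and hypothetical bucket names (my-etl-staging, my-etl-archive), object key, and filter rule chosen purely for illustration:

import csv
import io

import boto3  # AWS SDK for Python


# Hypothetical bucket names, used purely for illustration.
STAGING_BUCKET = "my-etl-staging"
ARCHIVE_BUCKET = "my-etl-archive"

s3 = boto3.client("s3")


def transform(rows: list[dict]) -> list[dict]:
    # Illustrative transformation: drop rows missing an 'id' value.
    return [row for row in rows if row.get("id")]


def run_etl(key: str) -> None:
    # Extract: fetch the raw CSV object from the staging bucket.
    body = s3.get_object(Bucket=STAGING_BUCKET, Key=key)["Body"]
    rows = list(csv.DictReader(io.StringIO(body.read().decode("utf-8"))))

    # Transform: clean the records in memory on the EC2 instance.
    cleaned = transform(rows)
    if not cleaned:
        return  # Nothing to load.

    # Load: write the processed output to the archive bucket.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(cleaned[0].keys()))
    writer.writeheader()
    writer.writerows(cleaned)
    s3.put_object(
        Bucket=ARCHIVE_BUCKET,
        Key=f"processed/{key}",
        Body=out.getvalue().encode("utf-8"),
    )


if __name__ == "__main__":
    run_etl("daily_export.csv")  # Hypothetical object key.

Because the instance reads from and writes to S3 directly, the staging and archive buckets act as durable hand-off points between pipeline stages, and the EC2 instance itself stays stateless and easy to replace or scale.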