Building ETL Pipelines with Python

By: Brij Kishore Pandey, Emily Ro Schoof
Overview of this book

Modern extract, transform, and load (ETL) pipelines for data engineering have favored Python for its broad range of uses and its large assortment of tools, applications, and open source components. With its simplicity and extensive library support, Python has emerged as the undisputed choice for data processing. In this book, you’ll walk through the end-to-end process of ETL data pipeline development, starting with the fundamentals of data pipelines and establishing a Python development environment to create them. Once you've explored ETL pipeline design principles and the ETL development process, you'll be equipped to design custom ETL pipelines. Next, you'll get to grips with the steps of the ETL process: extracting valuable data; transforming it through cleaning and manipulation while ensuring data integrity; and ultimately loading the processed data into storage systems. You’ll also review several ETL modules in Python, comparing their pros and cons when building data pipelines, and leverage cloud tools, such as AWS, to create scalable data pipelines. Lastly, you’ll learn about test-driven development for ETL pipelines to ensure safe deployments. By the end of this book, you’ll have worked through several hands-on examples and be able to create high-performance ETL pipelines and develop robust, scalable, and resilient environments using Python.
Table of Contents (22 chapters)

Part 1: Introduction to ETL, Data Pipelines, and Design Principles
Chapter 1: A Primer on Python and the Development Environment
Part 2: Designing ETL Pipelines with Python
Part 3: Creating ETL Pipelines in AWS
Part 4: Automating and Scaling ETL Pipelines

Preface

We’re living in an era where the volume of generated data is rapidly outgrowing its practicality in its unprocessed state. To gain valuable insights from this data, it needs to be transformed into digestible pieces of information. There is no shortage of quick and easy ways to accomplish this using the numerous licensed tools on the market that create “plug-and-play” data ingestion environments. However, the data requirements of industry-level projects often exceed the capabilities of existing tools and technologies: the processing capacity needed to handle large amounts of data, and with it the cost of processing, grows exponentially. As a result, meeting the data requirements of industry-level projects with traditional methods can be prohibitively expensive.

This growing demand for highly customizable data processing at a reasonable price point goes hand in hand with a growing demand for skilled data engineers. Data engineers handle the extraction, transformation, and loading of data, commonly referred to as the Extract, Transform, and Load (ETL) process. ETL workflows, also known as ETL pipelines, let data engineers build customized, strategic solutions as well as flexible deployment environments that can scale up or down with any fluctuations in data requirements between pipeline runs.
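To make the three stages concrete, here is a minimal sketch of an ETL pipeline in plain Python. The function names, the CSV columns, and the SQLite table are illustrative assumptions, not examples from the book; they simply show the extract, transform, and load steps wired together.

```python
import csv
import os
import sqlite3
import tempfile


def extract(path):
    """Extract: read raw records from a CSV source."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)


def transform(rows):
    """Transform: clean fields and drop invalid records."""
    for row in rows:
        name = row.get("name", "").strip().title()
        try:
            amount = float(row.get("amount", ""))
        except (TypeError, ValueError):
            continue  # skip rows whose amount is not numeric
        yield (name, amount)


def load(records, db_path):
    """Load: persist the cleaned records into a SQLite table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
        conn.executemany("INSERT INTO sales VALUES (?, ?)", records)
        return conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]


# Wire the three stages together on a small sample file
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "sales.csv")
    db = os.path.join(tmp, "sales.db")
    with open(src, "w", newline="") as f:
        csv.writer(f).writerows([
            ["name", "amount"],
            ["  alice smith ", "10.5"],
            ["bob jones", "oops"],  # invalid amount, dropped in transform
            ["carol lee", "3"],
        ])
    loaded = load(transform(extract(src)), db)
```

Because each stage is a generator, records stream through the pipeline one at a time rather than being held in memory all at once, which is the property that lets a design like this scale to larger inputs.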

SQL, Python, R, and Spark are among the most popular technologies used to develop custom data solutions. Python, in particular, has emerged as a frontrunner, mainly because of its adaptability and user-friendliness, which make collaboration easier for developers. In simpler terms, think of Python as the “universal tool” of the data world – it’s flexible, and people love working with it.

Building ETL Pipelines with Python introduces the fundamentals of data pipelines using open source Python tools and technologies. It provides a comprehensive guide to creating robust, scalable ETL pipelines, broken down into clear and repeatable steps. Our goal for this book is to give readers a resource that combines knowledge with practical application and encourages the pursuit of a career in data.

Our aim with this book is to offer you a comprehensive guide as you explore the diverse tools and technologies Python provides to create customized data pipelines. By the time you finish reading, you will have first-hand experience developing robust, scalable, and resilient pipelines using Python. These pipelines can seamlessly transition into a production environment, often without needing further adjustments.

We are excited to embark on this learning journey with you, sharing insights and expertise that can empower you to transform the way you approach data pipeline development. Let’s get to it!