Building ETL Pipelines with Python

By: Brij Kishore Pandey, Emily Ro Schoof
Overview of this book

Modern extract, transform, and load (ETL) pipelines for data engineering have favored the Python language for its broad range of uses and its large assortment of tools, applications, and open source components. With its simplicity and extensive library support, Python has emerged as the undisputed choice for data processing. In this book, you’ll walk through the end-to-end process of ETL data pipeline development, starting with an introduction to the fundamentals of data pipelines and establishing a Python development environment to create pipelines. Once you’ve explored ETL pipeline design principles and the ETL development process, you’ll be equipped to design custom ETL pipelines. Next, you’ll get to grips with the steps in the ETL process, which involve extracting valuable data; performing transformations through cleaning and manipulation while ensuring data integrity; and ultimately loading the processed data into storage systems. You’ll also review several ETL modules in Python, comparing their pros and cons when building data pipelines, and leverage cloud tools, such as AWS, to create scalable data pipelines. Lastly, you’ll learn about the concept of test-driven development for ETL pipelines to ensure safe deployments. By the end of this book, you’ll have worked through several hands-on examples and will be able to create high-performance ETL pipelines and develop robust, scalable, and resilient environments using Python.
Table of Contents (22 chapters)

Part 1: Introduction to ETL, Data Pipelines, and Design Principles
Chapter 1: A Primer on Python and the Development Environment
Part 2: Designing ETL Pipelines with Python
Part 3: Creating ETL Pipelines in AWS
Part 4: Automating and Scaling ETL Pipelines

Transformation and data cleansing

As the next step in the pipeline creation tutorial, it is crucial to perform data cleansing on each of the DataFrames so that your clients receive reliable, trustworthy data. As a team, you decide to apply the following data cleansing tasks to each DataFrame:

  1. Remove duplicates: Remove any duplicate rows in each DataFrame using the drop_duplicates() function:
    df = df.drop_duplicates()
  2. Handle missing values: Check for any missing values in the DataFrames and handle them appropriately. For example, you can replace missing values in numeric columns with the mean and categorical columns with the mode using the fillna() function (a combined sketch follows this list):
    # Replace missing values in numeric columns with the mean
    df.fillna(df.mean(numeric_only=True), inplace=True)
    # Replace missing values in categorical columns with the mode
    df.fillna(df.mode().iloc[0], inplace=True)
  3. Convert data types: Convert...
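Putting the first two steps together, here is a minimal, self-contained sketch of deduplication and missing-value handling. The sample DataFrame and its column names (customer_id, region, order_total) are hypothetical and chosen only for illustration; in the actual pipeline, the DataFrames come from your extraction step:

    import numpy as np
    import pandas as pd

    # Hypothetical sample data for illustration only
    df = pd.DataFrame({
        "customer_id": [1, 2, 2, 3, 4],
        "region": ["east", "west", "west", None, "east"],
        "order_total": [100.0, 250.0, 250.0, np.nan, 75.0],
    })

    # Step 1: remove exact duplicate rows
    df = df.drop_duplicates()

    # Step 2a: fill missing values in numeric columns with each column's mean
    df.fillna(df.mean(numeric_only=True), inplace=True)

    # Step 2b: fill remaining missing values (categorical columns) with each column's mode
    df.fillna(df.mode().iloc[0], inplace=True)

    print(df)

Mean and mode imputation are simple defaults; depending on the dataset, you might instead drop incomplete rows or use a domain-specific fill value.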