Data Engineering with Python

By: Paul Crickard

Overview of this book

Data engineering provides the foundation for data science and analytics, and forms an important part of all businesses. This book will help you to explore various tools and methods that are used for understanding the data engineering process using Python. The book will show you how to tackle challenges commonly faced in different aspects of data engineering. You’ll start with an introduction to the basics of data engineering, along with the technologies and frameworks required to build data pipelines to work with large datasets. You’ll learn how to transform and clean data and perform analytics to get the most out of your data. As you advance, you'll discover how to work with big data of varying complexity and production databases, and build data pipelines. Using real-world examples, you’ll build architectures on which you’ll learn how to deploy data pipelines. By the end of this Python book, you’ll have gained a clear understanding of data modeling techniques, and will be able to confidently build data engineering pipelines for tracking data, running quality checks, and making necessary changes in production.
Table of Contents (21 chapters)
Section 1: Building Data Pipelines – Extract, Transform, and Load
Section 2: Deploying Data Pipelines in Production
Section 3: Beyond Batch – Building Real-Time Data Pipelines

Installing and configuring PySpark

PySpark is installed with Spark. You can find it in the ~/spark3/bin directory, alongside other libraries and tools. To configure PySpark to run, you need to export several environment variables. The variables are shown here:

export SPARK_HOME=/home/paulcrickard/spark3
export PATH=$SPARK_HOME/bin:$PATH
export PYSPARK_PYTHON=python3 

The preceding commands set the SPARK_HOME variable, which should point to where you installed Spark. I have pointed the variable at the head of the Spark cluster, since in a real deployment the worker nodes would be on other machines. The second command adds $SPARK_HOME/bin to your PATH. When you type a command, the operating system looks for it in the directories listed in your PATH, so it will now search ~/spark3/bin, which is where PySpark lives. The last command tells Spark to use python3 as the Python interpreter when running PySpark.
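With the variables exported, a quick way to confirm that everything is wired up is to create a SparkSession from Python. The following is a minimal sketch, not taken from the book: the script name, the appName, and the local[*] master (which assumes you are running Spark on the local machine rather than submitting to a cluster) are all illustrative choices:

# verify_pyspark.py - a minimal check that PySpark imports and starts
from pyspark.sql import SparkSession

# Build (or reuse) a SparkSession; local[*] uses all local cores.
spark = SparkSession.builder \
    .master("local[*]") \
    .appName("pyspark-smoke-test") \
    .getOrCreate()

print(spark.version)  # should print the installed Spark version

# Create a tiny DataFrame to confirm the session actually works.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])
df.show()

spark.stop()

If this prints the Spark version and a two-row table, PySpark is configured correctly.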

Running the preceding commands in a terminal will allow Spark to run while the terminal is open, but you will have to rerun them every time you open a new one. To make them permanent, you can add the commands...
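A common way to make such exports permanent, assuming a bash shell (zsh users would use ~/.zshrc instead), is to append them to your ~/.bashrc so every new terminal loads them automatically. A sketch of that approach:

# Append the exports to ~/.bashrc so each new terminal picks them up
# (assumes bash; this is a common convention, not the book's exact wording)
echo 'export SPARK_HOME=/home/paulcrickard/spark3' >> ~/.bashrc
echo 'export PATH=$SPARK_HOME/bin:$PATH' >> ~/.bashrc
echo 'export PYSPARK_PYTHON=python3' >> ~/.bashrc

# Reload the file in the current terminal without reopening it
source ~/.bashrc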