PySpark Cookbook

By: Denny Lee, Tomasz Drabas

Overview of this book

Apache Spark is an open source framework for efficient cluster computing with a strong interface for data parallelism and fault tolerance. The PySpark Cookbook presents effective and time-saving recipes for leveraging the power of Python and putting it to use in the Spark ecosystem. You'll start by learning the Apache Spark architecture and how to set up a Python environment for Spark. You'll then get familiar with the modules available in PySpark and start using them effortlessly. In addition, you'll discover how to abstract data with RDDs and DataFrames, and understand the streaming capabilities of PySpark. You'll then move on to using ML and MLlib to solve machine learning problems, and to GraphFrames for graph-processing problems. Finally, you will explore how to deploy your applications to the cloud using the spark-submit command. By the end of this book, you will be able to use the Python API for Apache Spark to solve problems associated with building data-intensive applications.

Overview of RDD actions

As noted in the preceding sections, there are two types of Apache Spark RDD operations: transformations and actions. An action returns a value to the driver after running a computation on the dataset, typically on the workers. In the preceding recipes, the take() and count() RDD operations are examples of actions.
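For example, a minimal sketch of the distinction (assuming an active SparkContext named sc, as in the earlier recipes):

# map() is a transformation: it is lazy and returns a new RDD
rdd = sc.parallelize([1, 2, 3, 4, 5])
doubled = rdd.map(lambda x: x * 2)

# count() and take() are actions: they trigger the computation
# on the workers and return values to the driver
doubled.count()    # returns 5
doubled.take(3)    # returns [2, 4, 6]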

Getting ready

This recipe will read a tab-delimited (or comma-delimited) file, so please ensure that you have a text (or CSV) file available. For your convenience, you can download the airport-codes-na.txt and departuredelays.csv files from http://bit.ly/2nroHbh. Ensure that your local Spark cluster can access this file (~/data/flights/airport-codes-na.txt).
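To load the file on a local cluster, a minimal sketch (assuming the path above and an active SparkContext named sc):

import os

# path from the recipe; expanduser resolves ~ to your home directory
path = os.path.expanduser('~/data/flights/airport-codes-na.txt')
myRDD = sc.textFile(path).map(lambda line: line.split("\t"))
myRDD.take(2)    # returns the first two rows, each split into a list of fields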

Note

If you are running Databricks, the same file is already included in the /databricks-datasets folder; the command is:

myRDD = sc.textFile('/databricks-datasets/flights/airport-codes-na.txt').map(lambda line: line.split("\t"))
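Whichever environment you use, a quick sanity check confirms the load (a sketch, assuming the myRDD defined above):

myRDD.count()    # an action: returns the number of rows in the file to the driver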

Many of the transformations in the next section will use the RDDs airports...