PySpark Cookbook

By: Denny Lee, Tomasz Drabas

Overview of this book

Apache Spark is an open source framework for efficient cluster computing with a strong interface for data parallelism and fault tolerance. The PySpark Cookbook presents effective and time-saving recipes for leveraging the power of Python and putting it to use in the Spark ecosystem. You’ll start by learning the Apache Spark architecture and how to set up a Python environment for Spark. You’ll then get familiar with the modules available in PySpark and start using them effortlessly. In addition, you’ll discover how to abstract data with RDDs and DataFrames, and understand the streaming capabilities of PySpark. You’ll then move on to using ML and MLlib to solve machine learning problems, and GraphFrames to solve graph-processing problems. Finally, you will explore how to deploy your applications to the cloud using the spark-submit command. By the end of this book, you will be able to use the Python API for Apache Spark to build data-intensive applications.

Handling duplicates

Duplicates show up in data for many reasons, but they are not always easy to spot. In this recipe, we will show you how to find the most common kinds and handle them using Spark.

Getting ready

To execute this recipe, you need to have a working Spark environment. If you do not have one, you might want to go back to Chapter 1, Installing and Configuring Spark, and follow the recipes you will find there. 

We will work with the dataset from the introduction. All the code you will need for this chapter can be found in the GitHub repository we set up for the book: go to the Chapter04 folder and open the 4.Preparing data for modeling.ipynb notebook.

No other prerequisites are required.

How to do it...

A duplicate is a record in your dataset that appears more than once; it is an exact copy of another row. Spark DataFrames have a convenience method for removing duplicated rows, the .dropDuplicates() transformation:

  1. Check whether any rows are duplicated, as follows:
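The notebook in the book’s repository contains the exact code for this step; the snippet below is a minimal, self-contained sketch instead. The SparkSession setup, the column names, and the sample rows are illustrative assumptions, not the book’s dataset. The check itself is simple: if the total row count exceeds the distinct row count, the DataFrame contains exact duplicates:

     from pyspark.sql import SparkSession

     # Illustrative setup; in the book's notebook a SparkSession
     # is typically already available as `spark`
     spark = SparkSession.builder.appName('HandlingDuplicates').getOrCreate()

     # Hypothetical sample data: the third row is an exact copy of the second
     df = spark.createDataFrame(
         [(1, 'Alice', 29),
          (2, 'Bob', 35),
          (2, 'Bob', 35),
          (3, 'Carol', 41)],
         ['id', 'name', 'age'])

     # A gap between the two counts means exact duplicates are present
     print('Count of rows: {0}'.format(df.count()))
     print('Count of distinct rows: {0}'.format(df.distinct().count()))

Once duplicates are confirmed, the .dropDuplicates() transformation mentioned above removes the exact copies:

     df_clean = df.dropDuplicates()
     df_clean.show()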