Hands-On Big Data Analytics with PySpark

By: Rudy Lai, Bartłomiej Potaczek

Overview of this book

Apache Spark is an open source parallel-processing framework that has been around for quite some time now. One of the many uses of Apache Spark is for data analytics applications across clustered computers. In this book, you will not only learn how to use Spark and the Python API to create high-performance analytics with big data, but also discover techniques for testing, immunizing, and parallelizing Spark jobs. You will learn how to source data from popular data sources and formats, including HDFS, Hive, JSON, and S3, and deal with large datasets with PySpark to gain practical big data experience. This book will help you work on prototypes on local machines and subsequently go on to handle messy data in production and at scale. This book covers installing and setting up PySpark, RDD operations, big data cleaning and wrangling, and aggregating and summarizing data into useful reports. You will also learn how to implement some practical and proven techniques to improve certain aspects of programming and administration in Apache Spark. By the end of the book, you will be able to build big data analytical solutions using the various PySpark offerings and also optimize them effectively.

Implementing a custom partitioner

In this section, we'll implement a custom partitioner: one that takes a list of ranges and, when a key falls into a specific range, assigns it the partition number equal to the index of that range in the list.
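Here is a minimal sketch of that idea in Scala. The class name, the (start, end) representation of a range, and the fallback to partition 0 are assumptions for illustration, not necessarily the book's final code:

import org.apache.spark.Partitioner

// A range-based partitioner: each (start, end) pair in `ranges` defines one
// partition, and a key is routed to the index of the first range containing it.
class CustomRangePartitioner(ranges: List[(Int, Int)]) extends Partitioner {

  // One partition per range in the list.
  override def numPartitions: Int = ranges.size

  override def getPartition(key: Any): Int = {
    val amount = key.asInstanceOf[Int]
    val index = ranges.indexWhere { case (start, end) =>
      amount >= start && amount <= end
    }
    // Keys outside every range fall back to partition 0 in this sketch.
    if (index >= 0) index else 0
  }
}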

We will cover the following topics:

  • Implementing a custom partitioner
  • Implementing a range partitioner
  • Testing our partitioner

We will implement our own range-partitioning logic and then test the partitioner. Let's start with a black-box test, without looking at the implementation.

The first part of the code is similar to what we have used already, but this time we will key the data by the transaction amount, as shown in the following example:

val keysWithValuesList =
  Array(
    UserTransaction("A", 100),
    UserTransaction("B", 4),
    UserTransaction("A", 100001),
    UserTransaction("B", 10) // the source excerpt is truncated here; this last entry is a representative stand-in
  )
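The black-box test then keys this data by amount and repartitions it with the custom partitioner. The following sketch is one way the test could continue: it presumes a SparkContext named spark, a UserTransaction case class with userId and amount fields, the CustomRangePartitioner sketched earlier, and illustrative range boundaries:

val data = spark.parallelize(keysWithValuesList)

// Key each transaction by its amount, then repartition with the custom
// range partitioner; the three ranges here are illustrative.
val partitioned = data
  .keyBy(_.amount)
  .partitionBy(new CustomRangePartitioner(List((0, 100), (100, 10000), (10000, 1000000))))

// Black-box check: record which partition each amount landed in. An amount
// of 4 should land in partition 0, and 100001 in partition 2.
partitioned
  .mapPartitionsWithIndex { (index, iter) =>
    iter.map { case (amount, _) => (index, amount) }
  }
  .collect()
  .foreach { case (index, amount) => println(s"partition $index: amount $amount") }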