Hands-On Big Data Analytics with PySpark

By : Rudy Lai, Bartłomiej Potaczek

Overview of this book

Apache Spark is an open source parallel-processing framework that has been around for quite some time now. One of the many uses of Apache Spark is for data analytics applications across clustered computers. In this book, you will not only learn how to use Spark and the Python API to create high-performance analytics with big data, but also discover techniques for testing, immunizing, and parallelizing Spark jobs. You will learn how to source data from popular platforms and formats, including HDFS, Hive, JSON, and S3, and work with large datasets in PySpark to gain practical big data experience. This book will help you build prototypes on local machines and then go on to handle messy data in production and at scale. It covers installing and setting up PySpark, RDD operations, big data cleaning and wrangling, and aggregating and summarizing data into useful reports. You will also learn how to implement practical and proven techniques to improve certain aspects of programming and administration in Apache Spark. By the end of the book, you will be able to build big data analytical solutions using the various PySpark offerings and optimize them effectively.

Manipulating DataFrames with Spark SQL schemas

In this section, we will learn more about DataFrames and learn how to use Spark SQL.

The Spark SQL interface is very simple. Once we take the labels away from a dataset, we are in unsupervised learning territory, and Spark has great support for clustering and dimensionality reduction algorithms. By using Spark SQL to give big data a structure, we can tackle these learning problems effectively.

Let's take a look at the code we will be using in our Jupyter Notebook. To maintain consistency, we will use the same KDD Cup data:

  1. We will first load the file with textFile into a raw_data variable, as follows:
raw_data = sc.textFile("./kddcup.data.gz")
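
As a quick sanity check, and assuming the gzipped KDD Cup file sits in the notebook's working directory, we can peek at the first record to confirm the load:

raw_data.take(1)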
  2. What's new here is that we are importing two new classes from pyspark.sql:
    • Row
    • SQLContext
  3. The following code shows how to import these two classes:
from pyspark.sql import Row, SQLContext
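
To show where these imports lead, here is a minimal sketch of the rest of the flow, reusing sc and the raw_data RDD from step 1 and assuming the standard comma-separated KDD Cup record format. The field names used below (duration, protocol_type, service) follow the KDD Cup schema, and the parsing step and table name interactions are our own illustrative choices, not the book's exact code:

from pyspark.sql import Row, SQLContext

# Wrap the SparkContext so we can create DataFrames and run SQL
sql_context = SQLContext(sc)

# Split each comma-separated line into fields
csv_data = raw_data.map(lambda line: line.split(","))

# Turn each record into a Row; the named fields give the data its structure
rows = csv_data.map(lambda f: Row(duration=int(f[0]),
                                  protocol_type=f[1],
                                  service=f[2]))

# Build a DataFrame from the RDD of Rows and register it for SQL queries
df = sql_context.createDataFrame(rows)
df.registerTempTable("interactions")

# Example query: count interactions per protocol
sql_context.sql("SELECT protocol_type, COUNT(*) AS n "
                "FROM interactions GROUP BY protocol_type").show()

Here, registerTempTable makes the DataFrame queryable by name; in Spark 2.0 and later, the same idea is spelled createOrReplaceTempView.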