Big Data Analysis with Python

By Ivan Marin, Ankit Shukla, Sarang VK

Overview of this book

Processing big data in real time is challenging due to scalability, information inconsistency, and fault tolerance. Big Data Analysis with Python teaches you how to use tools that can control this data avalanche for you. With this book, you'll learn practical techniques to aggregate data into useful dimensions for subsequent analysis, extract statistical measurements, and transform datasets into features for other systems. The book begins with an introduction to data manipulation in Python using pandas. You'll then get familiar with statistical analysis and plotting techniques. With multiple hands-on activities in store, you'll be able to analyze data that is distributed across several computers by using Dask. As you progress, you'll study how to aggregate data for plots when the entire dataset cannot fit into memory. You'll also explore Hadoop (HDFS and YARN), which will help you tackle larger datasets. The book also covers Spark and explains how it interacts with other tools. By the end of this book, you'll be able to bootstrap your own Python environment, process large files, and manipulate data to generate statistics, metrics, and graphs.

SQL Operations on a Spark DataFrame


A DataFrame in Spark is a distributed collection of data organized into rows and columns. It is analogous to a table in a relational database or a sheet in Excel. Spark RDDs and DataFrames are efficient at processing large amounts of data and can scale to petabytes, whether the data is structured or unstructured.
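As a quick illustration, here is a minimal sketch of creating a Spark DataFrame from an in-memory list of rows; the column names and sample values are made up for the example:

from pyspark.sql import SparkSession

# Start (or reuse) a local SparkSession, the entry point to the DataFrame API
spark = SparkSession.builder.appName("dataframe-demo").getOrCreate()

# Hypothetical sample rows: (name, age, balance)
df = spark.createDataFrame(
    [("alice", 34, 1200.0), ("bob", 45, 300.5), ("carol", 29, 870.0)],
    ["name", "age", "balance"],
)

df.printSchema()  # the schema Spark inferred for the columns
df.show()         # render the rows as a small table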

Spark optimizes queries by organizing the DataFrame into named columns, which gives it knowledge of the data's schema. Some of the most frequently used SQL-style operations include subsetting the data, merging datasets, filtering rows, selecting specific columns, dropping columns, dropping rows with null values, and adding new columns, among others; a sketch of these operations follows below.
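Continuing with the same hypothetical data (recreated here so the snippet runs on its own), these operations map onto DataFrame methods roughly as follows; all column and file names are assumptions for illustration:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sql-ops-demo").getOrCreate()

# Hypothetical rows, with a null balance so dropna() has something to do
df = spark.createDataFrame(
    [("alice", 34, 1200.0), ("bob", 45, 300.5), ("carol", 29, None)],
    ["name", "age", "balance"],
)

subset = df.select("name", "balance")      # select specific columns
adults = df.filter(F.col("age") >= 30)     # filter rows
no_age = df.drop("age")                    # drop a column
clean = df.dropna()                        # drop rows containing null values
with_flag = df.withColumn(                 # add a derived column
    "high_balance", F.col("balance") > 1000
)

# Merging: join with another hypothetical DataFrame on a key column
countries = spark.createDataFrame(
    [("alice", "BR"), ("bob", "US")], ["name", "country"]
)
merged = df.join(countries, on="name", how="left")
merged.show()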

Exercise 48: Reading Data in PySpark and Carrying Out SQL Operations

For summary statistics of the data, we can call spark_df.describe().show(), which reports the count, mean, standard deviation, minimum, and maximum for the columns in the DataFrame.
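A minimal sketch of that call, assuming the data lives in a CSV file; the file name and read options here are placeholders rather than the book's exact setup:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("summary-stats").getOrCreate()

# Hypothetical path; header/inferSchema assume a typical CSV layout
spark_df = spark.read.csv("bank.csv", header=True, inferSchema=True)

# Reports count, mean, stddev, min, and max per column
spark_df.describe().show()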

For example, in the dataset that we have considered—the bank marketing...