
Big Data Analysis with Python

By: Ivan Marin, Ankit Shukla, Sarang VK

Overview of this book

Processing big data in real time is challenging due to scalability, information inconsistency, and fault tolerance. Big Data Analysis with Python teaches you how to use tools that can tame this data avalanche for you. With this book, you'll learn practical techniques to aggregate data into useful dimensions for subsequent analysis, extract statistical measurements, and transform datasets into features for other systems. The book begins with an introduction to data manipulation in Python using pandas. You'll then get familiar with statistical analysis and plotting techniques. With multiple hands-on activities in store, you'll be able to analyze data that is distributed across several computers by using Dask. As you progress, you'll study how to aggregate data for plots when the entire dataset cannot be accommodated in memory. You'll also explore Hadoop (HDFS and YARN), which will help you tackle larger datasets. The book also covers Spark and explains how it interacts with other tools. By the end of this book, you'll be able to bootstrap your own Python environment, process large files, and manipulate data to generate statistics, metrics, and graphs.

Chapter 05: Missing Value Handling and Correlation Analysis in Spark


Activity 12: Missing Value Handling and Correlation Analysis with PySpark DataFrames

  1. Import the required libraries and modules in the Jupyter notebook, as illustrated here:

    import findspark
    findspark.init()
    import pyspark
    import random
  2. Set up the SparkContext with the help of the following command in the Jupyter notebook:

    sc = pyspark.SparkContext(appName = "chapter5")
  3. Similarly, set up the SQLContext in the notebook:

    from pyspark.sql import SQLContext
    sqlc = SQLContext(sc)
  4. Now, read the CSV data into a Spark object using the following command:

    df = sqlc.read.format('com.databricks.spark.csv') \
             .options(header = 'true', inferschema = 'true') \
             .load('iris.csv')
    df.show(5)

    The output is as follows:

    Figure 5.14: Iris DataFrame, reading the CSV data into a Spark object
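Outside Spark, the same file layout can be parsed with the standard library alone. The sketch below assumes the column names used later in this activity (Sepallength, Species, and so on) and inlines two hypothetical sample rows rather than reading iris.csv from disk:

```python
import csv
import io

# Inline sample standing in for iris.csv (hypothetical rows, same header layout)
raw = (
    "Sepallength,Sepalwidth,Petallength,Petalwidth,Species\n"
    "5.1,3.5,1.4,0.2,setosa\n"
    "4.9,3.0,1.4,0.2,setosa\n"
)

# DictReader maps each row to a {column name: string value} dictionary
rows = list(csv.DictReader(io.StringIO(raw)))
```

Note that, unlike Spark with inferschema enabled, the standard csv module leaves every value as a string, so numeric columns still need an explicit float() conversion.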

  5. Fill in the missing values in the Sepallength column with the column's mean.

  6. First, calculate the mean of the Sepallength column using the following command:

    from pyspark.sql.functions import mean
    avg_sl = df.select(mean('Sepallength')).toPandas()['avg(Sepallength)']
  7. Now, impute the missing values in the Sepallength column with the column's mean, as illustrated here:

    y = df
    y = y.na.fill(float(avg_sl),['Sepallength'])
    y.describe().show(1)

    The output is as follows:

    Figure 5.15: Iris DataFrame
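The imputation above replaces each null in Sepallength with the column mean. Stripped of Spark, the logic reduces to the following pure-Python sketch (the sample values are hypothetical, not the iris data):

```python
from statistics import mean

# Hypothetical Sepallength sample with two missing entries
values = [5.1, None, 4.9, None, 5.0]

# Mean over the observed values only, mirroring Spark's mean() aggregate
avg = mean(v for v in values if v is not None)

# na.fill equivalent: replace each None with the column mean
imputed = [avg if v is None else v for v in values]
```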

  8. Compute the correlation matrix for the dataset. Make sure to import the required modules, as shown here:

    from pyspark.mllib.stat import Statistics
    import pandas as pd
  9. Now, fill any remaining missing values in the DataFrame before computing the correlation:

    z = y.fillna(1)
  10. Next, remove the String columns from the PySpark DataFrame, as illustrated here:

    a = z.drop('Species') 
    features = a.rdd.map(lambda row: row[0:])
  11. Now, compute the correlation matrix in Spark:

    correlation_matrix = Statistics.corr(features, method="pearson")
  12. Next, convert the correlation matrix into a pandas DataFrame using the following command:

    correlation_df = pd.DataFrame(correlation_matrix)
    correlation_df.index, correlation_df.columns = a.columns, a.columns
    correlation_df

    The output is as follows:

    Figure 5.16: Convert the correlation matrix into a pandas DataFrame
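Statistics.corr with method="pearson" computes, for every pair of numeric columns, the Pearson coefficient r = cov(x, y) / (sx * sy). A self-contained sketch of that per-pair calculation:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # unnormalized covariance
    sx = sqrt(sum((a - mx) ** 2 for a in x))              # unnormalized spread of x
    sy = sqrt(sum((b - my) ** 2 for b in y))              # unnormalized spread of y
    return cov / (sx * sy)
```

Perfectly linear pairs give r = 1 (or -1 when inversely related); the strongly correlated iris column pairs plotted in the next steps have r close to 1.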

  13. Plot the variable pairs showing strong positive correlation and fit a linear line on them.

  14. First, load the data from the Spark DataFrame into a pandas DataFrame:

    import pandas as pd
    dat = y.toPandas()
    type(dat)

    The output is as follows:

    pandas.core.frame.DataFrame
  15. Next, load the required plotting modules and plot the data using the following commands:

    import matplotlib.pyplot as plt
    import seaborn as sns
    %matplotlib inline
    sns.lmplot(x = "Sepallength", y = "Petallength", data = dat)
    plt.show()

    The output is as follows:

    Figure 5.17: Seaborn plot for x = "Sepallength", y = "Petallength"
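sns.lmplot overlays an ordinary least-squares regression line on the scatter plot. The slope and intercept it draws come from the standard closed-form OLS solution, sketched here in plain Python:

```python
def ols_fit(x, y):
    """Return (slope, intercept) of the least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum(
        (a - mx) ** 2 for a in x
    )
    intercept = my - slope * mx
    return slope, intercept
```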

  16. Plot the graph so that x equals Sepallength and y equals Petalwidth:

    import seaborn as sns
    sns.lmplot(x = "Sepallength", y = "Petalwidth", data = dat)
    plt.show()

    The output is as follows:

    Figure 5.18: Seaborn plot for x = "Sepallength", y = "Petalwidth"

  17. Plot the graph so that x equals Petallength and y equals Petalwidth:

    sns.lmplot(x = "Petallength", y = "Petalwidth", data = dat)
    plt.show()

    The output is as follows:

    Figure 5.19: Seaborn plot for x = "Petallength", y = "Petalwidth"