
Big Data Analysis with Python

By: Ivan Marin, Ankit Shukla, Sarang VK

Overview of this book

Processing big data in real time is challenging due to scalability, information inconsistency, and fault tolerance. Big Data Analysis with Python teaches you how to use tools that can control this data avalanche for you. With this book, you'll learn practical techniques to aggregate data into useful dimensions for subsequent analysis, extract statistical measurements, and transform datasets into features for other systems. The book begins with an introduction to data manipulation in Python using pandas. You'll then get familiar with statistical analysis and plotting techniques. With multiple hands-on activities in store, you'll be able to analyze data that is distributed over several computers by using Dask. As you progress, you'll study how to aggregate data for plots when the entire dataset cannot fit in memory. You'll also explore Hadoop (HDFS and YARN), which will help you tackle larger datasets. The book also covers Spark and explains how it interacts with other tools. By the end of this book, you'll be able to bootstrap your own Python environment, process large files, and manipulate data to generate statistics, metrics, and graphs.

Chapter 03: Working with Big Data Frameworks


Activity 8: Parsing Text

  1. Read the text file into a Spark RDD using the text method:

    rdd_df = spark.read.text("/localdata/myfile.txt").rdd

    To parse the file that we are reading, we will use lambda functions and Spark operations such as map, flatMap, and reduceByKey. flatMap applies a function to all elements of an RDD, flattens the results, and returns the transformed RDD. reduceByKey merges the values that share the same key, using the combining function provided. With these functions, we can count the number of lines and words in the text (a consolidated sketch of the whole pipeline appears at the end of this activity).

  2. Extract the lines from the text using the following command:

    lines = rdd_df.map(lambda line: line[0])
  3. Each line of the file is now an entry in the RDD. To check the result, you can use the collect method, which gathers all the data back to the driver process:

    lines.collect()
  4. Now, let's count the number of lines, using the count method:

    lines.count()

    Note

    Be careful when using the collect method! If the DataFrame or RDD being collected is larger than the memory of the local driver, Spark will throw an error. A safer way to inspect a sample with take is sketched at the end of this activity.

  5. Now, let's split each line into words, breaking it at the spaces, flatten all of the resulting elements into a single list, and convert every word to lowercase:

    splits = lines.flatMap(lambda x: x.split(' '))
    lower_splits = splits.map(lambda x: x.lower())
  6. Let's also remove the stop words. We could use a more comprehensive stop word list from NLTK, but for now, we will roll our own:

    stop_words = ['of', 'a', 'and', 'to']
  7. Use the following command to remove the stop words from our token list:

    tokens = lower_splits.filter(lambda x: x and x not in stop_words)

    We can now process our token list and count the unique words. The idea is to generate a list of tuples, where the first element is the token and the second element is the count of that particular token.

  8. First, let's map each token to a pair consisting of the token and an initial count of 1:

    token_list = tokens.map(lambda x: [x, 1])
  9. Use the reduceByKey operation, which will sum the counts for each key, and sort the result by count in descending order:

    from operator import add  # add(x, y) == x + y; sums the per-token counts
    count = token_list.reduceByKey(add).sortBy(lambda x: x[1], ascending=False)
    count.collect()

Remember, collect brings all the data back to the driver node! Always check whether there is enough memory on the driver by using tools such as top and htop.
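As a safer alternative to collect, the take and takeOrdered methods return only a bounded number of elements to the driver. The following is a minimal sketch that reuses the lines and count RDDs defined in the steps above; the sample sizes are arbitrary:

    # Inspect a small sample instead of collecting the entire RDD to the driver.
    lines.take(5)                                # first 5 lines of the file
    count.take(10)                               # 10 (token, count) pairs from the sorted RDD
    count.takeOrdered(10, key=lambda x: -x[1])   # top 10 tokens by count, without a full sortBy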
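For reference, the whole activity can be condensed into a single self-contained script. This is a sketch only: it assumes a plain-text file at the same hypothetical path used in step 1, builds its own SparkSession (the application name is made up), and reuses the minimal stop word list from step 6.

    from operator import add
    from pyspark.sql import SparkSession

    # Start (or reuse) a SparkSession; the app name here is arbitrary.
    spark = SparkSession.builder.appName("parse-text-activity").getOrCreate()

    # Read the file and extract the raw string from each Row.
    lines = spark.read.text("/localdata/myfile.txt").rdd.map(lambda line: line[0])

    # Tokenize: split on spaces, flatten, lowercase, drop empty strings and stop words.
    stop_words = ['of', 'a', 'and', 'to']
    tokens = (lines.flatMap(lambda x: x.split(' '))
                   .map(lambda x: x.lower())
                   .filter(lambda x: x and x not in stop_words))

    # Count each token and sort by frequency, highest first.
    count = (tokens.map(lambda x: (x, 1))
                   .reduceByKey(add)
                   .sortBy(lambda x: x[1], ascending=False))

    print(lines.count())   # number of lines in the file
    print(count.take(10))  # the ten most frequent tokens and their counts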