Our first script reads in a text file and adds up the lengths of all of its lines:
    import pyspark

    # Initialize Spark only if it has not been initialized already
    if 'sc' not in globals():
        sc = pyspark.SparkContext()

    # Read the notebook file itself, compute each line's length, and sum the lengths
    lines = sc.textFile("Spark File Words.ipynb")
    lineLengths = lines.map(lambda s: len(s))
    totalLength = lineLengths.reduce(lambda a, b: a + b)
    print(totalLength)
In the script, we first initialize Spark, but only if we have not done so already. Spark will complain if you try to initialize it more than once, so every Spark script should begin with this if guard.
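Recent versions of PySpark also offer SparkContext.getOrCreate(), which expresses the same idea more directly; this is a minimal sketch of that alternative, not a change to the script above:

    import pyspark

    # getOrCreate() returns the running SparkContext if one exists,
    # or creates a new one otherwise, so double initialization is avoided
    sc = pyspark.SparkContext.getOrCreate()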
The script reads in a text file (the source file of this very script), computes the length of every line, and then adds all the lengths together.
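To see the map and reduce steps on data small enough to trace by hand, we can build the same pipeline from an in-memory list instead of a file (the sample strings here are made up for illustration):

    # A tiny RDD built from an in-memory list
    words = sc.parallelize(["spark", "is", "fun"])
    lengths = words.map(lambda s: len(s))       # [5, 2, 3]
    total = lengths.reduce(lambda a, b: a + b)  # 10
    print(total)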
A lambda function is an anonymous (not named) function that takes arguments and returns a value. In the first case, given a string s, it returns its length.
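For example, the lambda used in the script behaves exactly like an equivalent named function:

    # A named function and the equivalent anonymous lambda
    def length_of(s):
        return len(s)

    same_thing = lambda s: len(s)

    print(length_of("spark"))   # 5
    print(same_thing("spark"))  # 5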
A reduce function takes a two-argument function and applies it to the first two elements of the list, replaces those two values with the result, and then proceeds through the rest of the list in the same way. In our case, the lambda adds each pair of values, so reduce produces the sum of all the line lengths.
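The same idea exists in plain Python as functools.reduce, which makes the step-by-step behavior easy to trace on a small list:

    from functools import reduce

    # reduce applies the lambda to 3 and 5 (giving 8),
    # then to 8 and 2 (giving 10)
    total = reduce(lambda a, b: a + b, [3, 5, 2])
    print(total)  # 10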