We have already executed Spark applications on the Spark standalone resource manager in other sections of this chapter (within the PySpark shell and as applications). Let's try to understand how these cluster resource managers differ from each other and when each should be used.
Before moving on to cluster resource managers, let's understand how cluster mode is different from local mode.
It is important to understand the scope and life cycle of variables and methods when executing code across a cluster. Let's look at an example with the foreach action:
counter = 0
rdd = sc.parallelize(data)

# A Python lambda cannot contain an assignment statement, so a named
# function with a global declaration is needed to attempt the increment:
def increment_counter(x):
    global counter
    counter += x

rdd.foreach(increment_counter)
print("Counter value: " + str(counter))
In local mode, the preceding code works fine because the driver and the executors run in the same memory space (a single JVM), so they all see the same counter variable.
In cluster mode, the counter value will never change and always remains at 0. In cluster mode, Spark computes the task's closure: the variables and methods that must be visible for an executor to perform its computations on the RDD. The closure is serialized and a copy is shipped to each executor, so each executor increments its own copy of counter, and the counter on the driver is never updated. When a global aggregate is needed in cluster mode, a shared variable such as a Spark Accumulator should be used instead.
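The same effect can be demonstrated without a Spark cluster. The following sketch (an illustrative analogy, not Spark code) uses Python's standard multiprocessing module: each worker process receives a serialized copy of the function's state, much as each Spark executor receives a serialized closure, so mutations in the workers never reach the parent process.

```python
# Sketch: why mutating a closure variable from workers does not
# affect the driver. Each worker process gets its own copy of
# `counter`, just as each Spark executor gets its own copy of
# the task closure.
import multiprocessing as mp

counter = 0  # lives in the parent ("driver") process

def increment(x):
    global counter
    counter += x  # mutates the worker's own copy only

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        pool.map(increment, [1, 2, 3, 4, 5])
    print("Counter value:", counter)  # still 0 in the parent
```

Running this prints "Counter value: 0", because the increments happened in the worker processes' copies of the variable, never in the parent's.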