Spark can be run in the local mode using the built-in standalone cluster scheduler. This means that all the Spark processes are run within the same JVM—effectively, a single, multithreaded instance of Spark. The local mode is very useful for prototyping, development, debugging, and testing. However, this mode can also be useful in real-world scenarios to perform parallel computation across multiple cores on a single computer.
As Spark's local mode is fully compatible with the cluster mode, programs written and tested locally can be run on a cluster with just a few additional steps.
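To give a flavor of what this looks like in practice, the following is a minimal Scala sketch (the application name and input file are hypothetical) in which the master URL is the only mode-specific setting; moving the program to a cluster is largely a matter of changing this one value:

import org.apache.spark.{SparkConf, SparkContext}

object FirstApp {
  def main(args: Array[String]): Unit = {
    // "local[*]" runs Spark in the local mode using all available cores.
    // On a cluster, this would instead be the cluster's master URL, for
    // example "spark://master-host:7077" for a standalone cluster.
    val conf = new SparkConf().setAppName("FirstApp").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // A simple word count over a hypothetical local text file.
    val counts = sc.textFile("data.txt")
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.collect().foreach(println)

    sc.stop()
  }
}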
The first step in setting up Spark locally is to download the latest version (at the time of writing this book, the version is 1.2.0). The download page of the Spark project website, found at http://spark.apache.org/downloads.html, contains links to download various versions as well as to obtain the latest source code via GitHub.
Tip
The Spark project documentation website at http://spark.apache.org/docs/latest/ is a comprehensive resource to learn more about Spark. We highly recommend that you explore it!
Spark needs to be built against a specific version of Hadoop in order to access Hadoop Distributed File System (HDFS) as well as standard and custom Hadoop input sources. The download page provides prebuilt binary packages for Hadoop 1, CDH4 (Cloudera's Hadoop Distribution), MapR's Hadoop distribution, and Hadoop 2 (YARN). Unless you wish to build Spark against a specific Hadoop version, we recommend that you download the prebuilt Hadoop 2.4 package from an Apache mirror using this link: http://www.apache.org/dyn/closer.cgi/spark/spark-1.2.0/spark-1.2.0-bin-hadoop2.4.tgz.
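Should you wish to build Spark yourself against a different Hadoop version, this is done from the Spark source tree using Maven. As a rough sketch only (the exact build profiles vary between releases, so consult the building documentation for your version), a build against Hadoop 2.4 with YARN support looks something like the following:

>mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package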
Spark requires the Scala programming language (version 2.10.4 at the time of writing this book) in order to run. Fortunately, the prebuilt binary package comes with the Scala runtime packages included, so you don't need to install Scala separately in order to get started. However, you will need to have a Java Runtime Environment (JRE) or Java Development Kit (JDK) installed (see the software and hardware list in this book's code bundle for installation instructions).
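You can check whether a suitable Java installation is already available on your path by running the following command, which should print a version string:

>java -version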
Once you have downloaded the Spark binary package, unpack the contents of the package and change into the newly created directory by running the following commands:
>tar xfvz spark-1.2.0-bin-hadoop2.4.tgz
>cd spark-1.2.0-bin-hadoop2.4
Spark places the user scripts to run Spark in the bin directory. You can test whether everything is working correctly by running one of the example programs included in Spark:
>./bin/run-example org.apache.spark.examples.SparkPi
This will run the example in Spark's local standalone mode. In this mode, all the Spark processes are run within the same JVM, and Spark uses multiple threads for parallel processing. By default, the preceding example uses a number of threads equal to the number of cores available on your system. Once the program is finished running, you should see something similar to the following lines near the end of the output:
…
14/11/27 20:58:47 INFO SparkContext: Job finished: reduce at SparkPi.scala:35, took 0.723269 s
Pi is roughly 3.1465
…
To configure the level of parallelism in the local mode, you can pass in a master parameter of the local[N] form, where N is the number of threads to use. For example, to use only two threads, run the following command instead:
>MASTER=local[2] ./bin/run-example org.apache.spark.examples.SparkPi
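The same setting can be applied programmatically in your own applications when constructing a SparkContext. The following minimal Scala sketch (with a hypothetical application name) is the in-code equivalent of passing MASTER=local[2]:

import org.apache.spark.{SparkConf, SparkContext}

// Run locally using exactly two threads, equivalent to MASTER=local[2].
val conf = new SparkConf().setAppName("TwoThreadApp").setMaster("local[2]")
val sc = new SparkContext(conf)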