Compute resources in a distributed environment need to be managed so that resource utilization is efficient and every job gets a fair chance to run. Spark comes with its own cluster manager, which is conveniently called standalone mode. Spark also supports working with YARN and Mesos cluster managers.
The cluster manager you choose should be driven mostly by legacy concerns and by whether other frameworks, such as MapReduce, share the same pool of compute resources. If your cluster has legacy MapReduce jobs running and not all of them can be converted into Spark jobs, it is a good idea to use YARN as the cluster manager. Mesos is emerging as a data center operating system that conveniently manages jobs across frameworks, and it works very well with Spark.
If the Spark framework is the only framework in your cluster, then standalone mode is good enough. As Spark evolves as a technology, you will see more and more use cases of Spark being used as the standalone framework, serving all of your big data compute needs. For example, some jobs may use Apache Mahout at present because MLlib does not yet have the specific machine-learning library that the job needs. As soon as MLlib gets that library, this particular job can be moved to Spark.
Let's consider a cluster of six nodes as an example setup--one master and five slaves (replace them with the actual node names in your cluster):
Master | m1.zettabytes.com |
Slaves | s1.zettabytes.com, s2.zettabytes.com, s3.zettabytes.com, s4.zettabytes.com, s5.zettabytes.com |
- Since Spark's standalone mode is the default, all you need to do is have the Spark binaries installed on both the master and slave machines. Put /opt/infoobjects/spark/sbin in the PATH on every node:
$ echo "export PATH=$PATH:/opt/infoobjects/spark/sbin" >> /home/hduser/.bashrc
- Start the standalone master server (SSH to master first):
hduser@m1.zettabytes.com~] start-master.sh
Note
The master, by default, starts on port 7077, which slaves use to connect to it. It also has a web UI on port 8080.
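If the default ports are already in use, the master can also be started directly through spark-class with explicit flags (see the arguments table further down); the following invocation is only an illustrative sketch that pins the defaults explicitly:
hduser@m1.zettabytes.com~] spark-class org.apache.spark.deploy.master.Master --host m1.zettabytes.com --port 7077 --webui-port 8080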
- Connect to each slave node using a Secure Shell (SSH) connection and start the worker, pointing it at the master:
hduser@s1.zettabytes.com~] spark-class org.apache.spark.deploy.worker.Worker spark://m1.zettabytes.com:7077
Argument | Meaning |
| -h <host>, --host <host> | IP address/DNS name for the service to listen on |
| -p <port>, --port <port> | Port for the service to listen on |
| --webui-port <port> | Port for the web UI (by default, 8080 for the master and 8081 for the worker) |
| -c <cores>, --cores <cores> | Total CPU cores that Spark applications can use on the machine (worker only) |
| -m <memory>, --memory <memory> | Total RAM that Spark applications can use on the machine (worker only) |
| -d <dir>, --work-dir <dir> | Directory to use for scratch space and job output logs |
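As an illustrative example (not part of the original setup), a worker on a busy slave could be started with explicit limits using the flags above; the core and memory values here are arbitrary:
hduser@s1.zettabytes.com~] spark-class org.apache.spark.deploy.worker.Worker --cores 8 --memory 16g --webui-port 8082 spark://m1.zettabytes.com:7077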
Note
For fine-grained configuration, the preceding parameters work with both the master and slaves. Rather than manually starting the master and slave daemons on each node, you can accomplish the same with the cluster launch scripts described in the following steps. Fully automated provisioning and configuration management of the nodes themselves is outside the scope of this book; please refer to books about Chef or Puppet.
- First, create the conf/slaves file on the master node and add one line per slave hostname (this example assumes five slave nodes; replace the following slave DNS names with the DNS names of the slave nodes in your cluster):
hduser@m1.zettabytes.com~] echo "s1.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s2.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s3.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s4.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s5.zettabytes.com" >> conf/slaves
Once the slaves file is set up, you can call the following scripts to start/stop the cluster:
Script name | Purpose |
| start-master.sh | Starts a master instance on the host machine |
| start-slaves.sh | Starts a slave instance on each node listed in the slaves file |
| start-all.sh | Starts both the master and the slaves |
| stop-master.sh | Stops the master instance on the host machine |
| stop-slaves.sh | Stops the slave instance on all the nodes listed in the slaves file |
| stop-all.sh | Stops both the master and the slaves |
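For example, assuming passwordless SSH has been set up from the master to all the slaves, the whole cluster can then be started or stopped from the master node (the scripts live in the sbin directory that was added to the PATH earlier):
hduser@m1.zettabytes.com~] start-all.sh
hduser@m1.zettabytes.com~] stop-all.sh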
- Connect an application to the cluster through Scala code:
val sparkContext = new SparkContext(new SparkConf().setMaster("spark://m1.zettabytes.com:7077"))
- Connect to the cluster through the Spark shell:
$ spark-shell --master spark://m1.zettabytes.com:7077
In standalone mode, Spark follows the master-slave architecture, very much like Hadoop MapReduce and YARN. The compute master daemon is called the Spark master and runs on one master node. The Spark master can be made highly available using ZooKeeper, and you can also add more standby masters on the fly if needed.
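As a minimal sketch of ZooKeeper-based recovery (assuming a ZooKeeper ensemble is already running; the zk1/zk2/zk3 hostnames below are placeholders), the standalone recovery properties can be passed to every master via spark-env.sh:
$ echo 'export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zk1.zettabytes.com:2181,zk2.zettabytes.com:2181,zk3.zettabytes.com:2181 -Dspark.deploy.zookeeper.dir=/spark"' >> /opt/infoobjects/spark/conf/spark-env.sh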
The compute slave daemon is called a worker, and it exists on each slave node. The worker daemon does the following:
- Reports the availability of the compute resources on a slave node, such as the number of cores, memory, and others, to the Spark master
- Spawns the executor when asked to do so by the Spark master
- Restarts the executor if it dies
There is, at most, one executor per application, per slave machine.
Both the Spark master and the worker daemons are very lightweight. Typically, a memory allocation between 500 MB and 1 GB is sufficient. This value can be set in conf/spark-env.sh by setting the SPARK_DAEMON_MEMORY parameter. For example, the following configuration sets the memory to 1 GB for both the master and worker daemons. Make sure you run it with superuser (sudo) privileges:
$ echo "export SPARK_DAEMON_MEMORY=1g" >> /opt/infoobjects/spark/conf/spark-env.sh
By default, each slave node has one worker instance running on it. Sometimes, you may have a few machines that are more powerful than others. In that case, you can spawn more than one worker on that machine with the following configuration (only on those machines):
$ echo "export SPARK_WORKER_INSTANCES=2" >> /opt/infoobjects/spark/conf/spark-env.sh
The Spark worker, by default, uses all the cores on the slave machine for its executors. If you would like to limit the number of cores the worker could use, you can set it to the number of your choice (for example, 12), using the following configuration:
$ echo "export SPARK_WORKER_CORES=12" >> /opt/infoobjects/spark/conf/spark-env.sh
The Spark worker, by default, makes all of the slave machine's RAM, minus 1 GB, available to its executors. Note that this setting does not allocate memory to any specific executor; that is controlled from the driver configuration. To assign a different value for the total memory (for example, 24 GB) to be used by all the executors combined, execute the following setting:
$ echo "export SPARK_WORKER_MEMORY=24g" >> /opt/infoobjects/spark/conf/spark-env.sh
There are some settings you can configure at the driver level:
- To specify the maximum number of CPU cores to be used by a given application across the cluster, you can set the spark.cores.max configuration in spark-submit or the Spark shell as follows:
$ spark-submit --conf spark.cores.max=12
- To specify the amount of memory to be allocated to each executor (the minimum recommendation is 8 GB), you can set the spark.executor.memory configuration in spark-submit or the Spark shell as follows:
$ spark-submit --conf spark.executor.memory=8g
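Putting these options together, a submission against this example cluster might look as follows; the application class and JAR names are placeholders only:
$ spark-submit --master spark://m1.zettabytes.com:7077 --conf spark.cores.max=12 --conf spark.executor.memory=8g --class com.example.MyApp myapp.jar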
The following diagram depicts the high-level architecture of a Spark cluster:
To find more configuration options, refer to the following URL: