Apache Spark 2.x Cookbook

By: Rishi Yadav
Overview of this book

While Apache Spark 1.x gained a lot of traction and adoption in its early years, Spark 2.x delivers notable improvements in the areas of API, schema awareness, performance, and structured streaming, and simplifies the building blocks for creating better, faster, smarter, and more accessible big data applications. This book uncovers all these features in the form of structured recipes to analyze and mature large and complex sets of data. Starting with installing and configuring Apache Spark with various cluster managers, you will learn how to set up development environments. Further on, you will be introduced to working with RDDs, DataFrames, and Datasets to operate on schema-aware data, and to real-time streaming with various sources such as the Twitter stream and Apache Kafka. You will also work through recipes on machine learning, including supervised learning, unsupervised learning, and recommendation engines in Spark. Last but not least, the final few chapters delve deeper into the concepts of graph processing using GraphX, securing your implementations, cluster optimization, and troubleshooting.

Deploying Spark on a cluster in standalone mode


Compute resources in a distributed environment need to be managed so that resource utilization is efficient and every job gets a fair chance to run. Spark comes with its own cluster manager, which is conveniently called standalone mode. Spark also supports working with YARN and Mesos cluster managers.

The cluster manager you choose should be mostly driven by both legacy concerns and whether other frameworks, such as MapReduce, share the same compute resource pool. If your cluster has legacy MapReduce jobs running and all of them cannot be converted into Spark jobs, it is a good idea to use YARN as the cluster manager. Mesos is emerging as a data center operating system to conveniently manage jobs across frameworks, and it is very compatible with Spark.

If the Spark framework is the only framework in your cluster, then the standalone mode is good enough. As Spark is evolving as a technology, you will see more and more use cases of Spark being used as the standalone framework, serving all your big data compute needs. For example, some jobs may use Apache Mahout at present because MLlib does not have a specific machine-learning library, which the job needs. As soon as MLlib gets its library, this particular job can be moved to Spark.

Getting ready

Let's consider a cluster of six nodes as an example setup, with one master and five slaves (replace them with the actual node names in your cluster):

Master
m1.zettabytes.com
Slaves
s1.zettabytes.com
s2.zettabytes.com
s3.zettabytes.com
s4.zettabytes.com
s5.zettabytes.com

How to do it…

  1. Since Spark's standalone mode is the default, all you need to do is have Spark binaries installed on both master and slave machines. Put /opt/infoobjects/spark/sbin in the path on every node:
$ echo "export PATH=$PATH:/opt/infoobjects/spark/sbin" >> /home/hduser/.bashrc
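This export must be present on every node. A minimal sketch of distributing it over SSH follows; the node names and the sbin path are this recipe's examples, and the loop only prints the ssh commands (to a file, for review) rather than executing them:

```shell
# Generate (not run) the ssh commands that would append Spark's sbin
# directory to PATH on every node in this recipe's example cluster.
SPARK_SBIN=/opt/infoobjects/spark/sbin
OUT=/tmp/spark_path_cmds.txt
: > "$OUT"
for node in m1 s1 s2 s3 s4 s5; do
  # Drop the surrounding 'echo ... >> "$OUT"' to actually execute each command.
  echo "ssh ${node}.zettabytes.com \"echo 'export PATH=\$PATH:${SPARK_SBIN}' >> ~/.bashrc\"" >> "$OUT"
done
cat "$OUT"
```

Review the generated file before running the commands, since they append to each node's .bashrc.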
  2. Start the standalone master server (SSH to the master first):
hduser@m1.zettabytes.com~] start-master.sh

Note

The master, by default, starts on port 7077, which slaves use to connect to it. It also has a web UI at port 8080.

 

  3. Connect to each slave node using a Secure Shell (SSH) connection and start a worker, pointing it at the master:
hduser@s1.zettabytes.com~] spark-class org.apache.spark.deploy.worker.Worker \
          spark://m1.zettabytes.com:7077

Argument                         Meaning

-h <host>, --host <host>         IP address or hostname to listen on
-p <port>, --port <port>         Port for the service to listen on
--webui-port <port>              Port for the web UI (by default, 8080 for the master and 8081 for the worker)
-c <cores>, --cores <cores>      Total CPU cores that Spark applications can use on the machine (worker only)
-m <memory>, --memory <memory>   Total RAM that Spark applications can use on the machine (worker only)
-d <dir>, --work-dir <dir>       Directory to use for scratch space and job output logs
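As an illustration of these flags, a worker can be started with explicit limits. The values below are examples, not recommendations, and the command assumes a running master at the URL shown:

```shell
# Start a worker capped at 8 cores and 16 GB for executors, with a custom work dir
# (example values; requires a running master at spark://m1.zettabytes.com:7077)
spark-class org.apache.spark.deploy.worker.Worker \
  --cores 8 --memory 16g --work-dir /tmp/spark-work \
  spark://m1.zettabytes.com:7077
```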

Note

For fine-grained configuration, the preceding parameters work with both the master and the slaves. Rather than starting the master and slave daemons manually on each node, this can also be accomplished using cluster launch scripts. Automating cluster setup with configuration management is outside the scope of this book; please refer to books about Chef or Puppet.

  4. First, create the conf/slaves file on the master node and add one line per slave hostname (using an example of five slave nodes, replace the following with the DNS names of the slave nodes in your cluster):
hduser@m1.zettabytes.com~] echo "s1.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s2.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s3.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s4.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s5.zettabytes.com" >> conf/slaves
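The same file can be created in one shot with a heredoc. In this sketch the file is written to /tmp so it is safe to try anywhere; on a real cluster it belongs at conf/slaves under the Spark installation directory:

```shell
# Create a slaves file listing one hostname per line
# (written to /tmp for illustration; use $SPARK_HOME/conf/slaves on a real cluster)
SLAVES_FILE=/tmp/slaves
cat > "$SLAVES_FILE" <<'EOF'
s1.zettabytes.com
s2.zettabytes.com
s3.zettabytes.com
s4.zettabytes.com
s5.zettabytes.com
EOF
cat "$SLAVES_FILE"
```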

Once the slaves file is set up, you can call the following scripts to start or stop the cluster:

Script name       Purpose

start-master.sh   Starts a master instance on the host machine
start-slaves.sh   Starts a slave instance on each node listed in the slaves file
start-all.sh      Starts both the master and the slaves
stop-master.sh    Stops the master instance on the host machine
stop-slaves.sh    Stops the slave instance on all the nodes listed in the slaves file
stop-all.sh       Stops both the master and the slaves
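For example, a typical bring-up and tear-down of the whole cluster from the master node looks like this (it assumes the slaves file exists and passwordless SSH to the slaves is configured):

```shell
# Bring the whole standalone cluster up, then down (run on the master)
start-all.sh   # starts the master, then a worker on every host in conf/slaves
# ... run jobs ...
stop-all.sh    # stops all the workers, then the master
```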

  5. Connect an application to the cluster through Scala code:
        val sparkContext = new SparkContext(new
          SparkConf().setMaster("spark://m1.zettabytes.com:7077"))
  6. Connect to the cluster through Spark shell:
$ spark-shell --master spark://master:7077

How it works…

In standalone mode, Spark follows the master-slave architecture, very much like Hadoop MapReduce and YARN. The compute master daemon is called the Spark master and runs on one master node. The Spark master can be made highly available using ZooKeeper. You can also add more standby masters on the fly, if needed.

The compute slave daemon is called a worker, and it exists on each slave node. The worker daemon does the following:

  • Reports the availability of the compute resources on a slave node, such as the number of cores, memory, and others, to the Spark master
  • Spawns the executor when asked to do so by the Spark master
  • Restarts the executor if it dies

There is at most one executor per application per slave machine.

Both the Spark master and the worker daemons are very lightweight. Typically, a memory allocation between 500 MB and 1 GB is sufficient. This value can be set in conf/spark-env.sh via the SPARK_DAEMON_MEMORY parameter. For example, the following configuration sets the memory to 1 GB for both the master and the worker daemon. Make sure you have superuser (sudo) privileges before running it:

$ echo "export SPARK_DAEMON_MEMORY=1g" >> /opt/infoobjects/spark/conf/spark-env.sh

By default, each slave node has one worker instance running on it. Sometimes, you may have a few machines that are more powerful than others. In that case, you can spawn more than one worker on that machine with the following configuration (only on those machines):

$ echo "export SPARK_WORKER_INSTANCES=2" >> /opt/infoobjects/spark/conf/spark-env.sh

The Spark worker, by default, uses all the cores on the slave machine for its executors. If you would like to limit the number of cores the worker could use, you can set it to the number of your choice (for example, 12), using the following configuration:

$ echo "export SPARK_WORKER_CORES=12" >> /opt/infoobjects/spark/conf/spark-env.sh

The Spark worker, by default, makes all of the available RAM on the slave machine (minus 1 GB for the operating system) available to its executors. Note that this setting controls the total memory used by all executors on the machine combined, not how much memory a specific executor uses (that is controlled from the driver configuration). To assign a different value for the total memory (for example, 24 GB) to be used by all the executors combined, execute the following setting:

$ echo "export SPARK_WORKER_MEMORY=24g" >> /opt/infoobjects/spark/conf/spark-env.sh
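Putting the worker-level settings together, a spark-env.sh fragment might look like the following. The values are this section's examples, and the file is written to /tmp here so the sketch is safe to run; on a real node you would append these lines to $SPARK_HOME/conf/spark-env.sh:

```shell
# Build a spark-env.sh fragment with the worker settings discussed above
# (illustrative values; append to $SPARK_HOME/conf/spark-env.sh on a real node)
ENV_FILE=/tmp/spark-env.sh
cat > "$ENV_FILE" <<'EOF'
export SPARK_DAEMON_MEMORY=1g      # memory for the master/worker daemons themselves
export SPARK_WORKER_INSTANCES=2    # run two workers on this (more powerful) machine
export SPARK_WORKER_CORES=12       # cap the cores available to executors on this node
export SPARK_WORKER_MEMORY=24g     # total RAM available to all executors combined
EOF
cat "$ENV_FILE"
```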

There are some settings you can do at the driver level:

  • To specify the maximum number of CPU cores to be used by a given application across the cluster, you can set the spark.cores.max configuration in spark-submit or spark-shell as follows:
$ spark-submit --conf spark.cores.max=12
  • To specify the amount of memory that each executor should be allocated (the minimum recommendation is 8 GB), you can set the spark.executor.memory configuration in Spark submit or Spark shell as follows:
$ spark-submit --conf spark.executor.memory=8g
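Both driver-level settings can be combined in a single submission. In this sketch, myapp.jar is a hypothetical application JAR, and the command assumes a running standalone cluster at the master URL from this recipe:

```shell
# Submit an app capped at 12 cores cluster-wide, with 8 GB per executor
# (myapp.jar is a placeholder; requires a running standalone cluster)
spark-submit --master spark://m1.zettabytes.com:7077 \
  --conf spark.cores.max=12 \
  --conf spark.executor.memory=8g \
  myapp.jar
```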

The following diagram depicts the high-level architecture of a Spark cluster:

See also

To find more configuration options, refer to the following URL: