Spark Cookbook

By Rishi Yadav

Deploying on a cluster with Mesos


Mesos is slowly emerging as a data center operating system that manages all the compute resources across a data center. It runs on any computer running Linux and is built on the same principles as the Linux kernel. Let's see how we can install Mesos.

How to do it...

Mesosphere provides a binary distribution of Mesos. The most recent Mesos package can be installed from the Mesosphere repositories by performing the following steps:

  1. Set up the Mesosphere repository on Ubuntu with the trusty codename:

    $ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
    $ DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
    $ CODENAME=$(lsb_release -cs)
    $ sudo vi /etc/apt/sources.list.d/mesosphere.list

    Add the following line to the file (here, ${DISTRO} resolves to ubuntu and ${CODENAME} to trusty):

    deb http://repos.mesosphere.io/ubuntu trusty main
    
  2. Update the repositories:

    $ sudo apt-get -y update
    
  3. Install Mesos:

    $ sudo apt-get -y install mesos
    
  4. To integrate Spark with Mesos, make the Spark binaries available to Mesos and configure the Spark driver to connect to Mesos.

  5. Take the Spark binaries from the first recipe and upload them to HDFS:

    $ hdfs dfs -put spark-1.4.0-bin-hadoop2.4.tgz spark-1.4.0-bin-hadoop2.4.tgz
    
  6. The master URL for a single-master Mesos cluster is mesos://host:5050; for a ZooKeeper-managed Mesos cluster, it is mesos://zk://host:2181, as shown in the sketch below.
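
    The following is a minimal Scala sketch of both forms (the hostnames mesos-master and zk-host are placeholders):

    import org.apache.spark.SparkConf

    // Connect to a single Mesos master (hypothetical host):
    val confSingle = new SparkConf().setMaster("mesos://mesos-master:5050")
    // Connect via a ZooKeeper-managed Mesos cluster (hypothetical host):
    val confZk = new SparkConf().setMaster("mesos://zk://zk-host:2181")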

  7. Set the following variables in spark-env.sh:

    $ sudo vi spark-env.sh
    export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
    export SPARK_EXECUTOR_URI=hdfs://localhost:9000/user/hduser/spark-1.4.0-bin-hadoop2.4.tgz
    
  8. Run from a Scala program (SparkContext also requires an application name; MesosApp below is a placeholder):

    import org.apache.spark.{SparkConf, SparkContext}
    val conf = new SparkConf().setMaster("mesos://host:5050").setAppName("MesosApp")
    val sparkContext = new SparkContext(conf)
    
  9. Run from the Spark shell:

    $ spark-shell --master mesos://host:5050
    

    Note

    Mesos has two run modes:

    Fine-grained: In fine-grained mode (the default), every Spark task runs as a separate Mesos task

    Coarse-grained: This mode launches only one long-running Spark task on each Mesos machine

  10. To run in the coarse-grained mode, set the spark.mesos.coarse property:

    conf.set("spark.mesos.coarse","true")