
Apache Mesos Essentials

By: Dharmesh Kakadia

Overview of this book

Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It allows developers to concurrently run applications such as Hadoop, Spark, and Storm on a dynamically shared pool of nodes. With Mesos, you have the power to manage a wide range of resources in a multi-tenant environment.

Starting with the basics, this book will give you an insight into all the features that Mesos has to offer. You will first learn how to set up Mesos in various environments, from data centers to the cloud. You will then learn how to implement a self-managed Platform as a Service environment with Mesos using service schedulers such as Chronos, Aurora, and Marathon. Next, you will delve into Mesos fundamentals and learn how to build distributed applications using Mesos primitives.

Finally, you will round things off by covering the operational aspects of Mesos, including logging, monitoring, high availability, and recovery.
Table of Contents (15 chapters)
Apache Mesos Essentials
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
Index

Spark on Mesos


Mesos can act as a cluster manager for Spark. When running on Mesos, Spark leverages Mesos' resource management capabilities, and Spark tasks are executed on Mesos worker nodes by the Spark executor. This allows resources to be shared between multiple Spark instances, or with other frameworks. Let's see how to install Spark on Mesos:

  1. Build and run Mesos, as shown in Chapter 1, Running Mesos.

  2. Download and extract the Spark tarball, following the same steps described in the earlier section.

  3. The Spark archive containing the executor has to be accessible from the Mesos worker nodes. Typically, Hadoop Distributed File System (HDFS) or Amazon S3 is used for this; we will use HDFS:

    ubuntu@master:~ $ hadoop fs -mkdir /tmp
    ubuntu@master:~ $ hadoop fs -put spark.tar.gz /tmp
    
  4. Create spark-env.sh from spark-env.sh.template, and add the following three export lines to the file:

    ubuntu@master:~ $ cp spark-env.sh.template spark-env.sh
    
    ubuntu@master:~ $ vim spark-env.sh
    export MESOS_NATIVE_LIBRARY=/usr/local...
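Taken together, the three exports in spark-env.sh typically look like the following sketch. All specific values here (the native library path, the HDFS URI and port, and the Mesos master hostname) are illustrative assumptions and must be adjusted to match your own cluster:

```shell
# Hypothetical spark-env.sh settings for running Spark on Mesos.
# The paths and hostnames below are assumptions for illustration only.

# Path to the Mesos native library built in Chapter 1
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so

# Location of the Spark archive uploaded to HDFS in step 3
export SPARK_EXECUTOR_URI=hdfs://master:9000/tmp/spark.tar.gz

# Mesos master URL (5050 is the default Mesos master port)
export MASTER=mesos://master:5050
```

With these variables set, Spark shells and applications launched from this installation will register with the Mesos master rather than a standalone Spark master, and Mesos workers will fetch the executor archive from HDFS.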