Fast Data Processing Systems with SMACK Stack

By: Raúl Estrada

Overview of this book

SMACK is an open source full stack for big data architecture. It is a combination of Spark, Mesos, Akka, Cassandra, and Kafka. This stack is the newest technique developers have begun to use to tackle critical real-time analytics for big data. This highly practical guide will teach you how to integrate these technologies to create a highly efficient data analysis system for fast data processing. We’ll start off with an introduction to SMACK and show you when to use it. First you’ll get to grips with functional thinking and problem solving using Scala. Next you’ll come to understand the Akka architecture. Then you’ll get to know how to improve the data structure architecture and optimize resources using Apache Spark. Moving forward, you’ll learn how to perform linear scalability in databases with Apache Cassandra. You’ll grasp the high throughput distributed messaging systems using Apache Kafka. We’ll show you how to build a cheap but effective cluster infrastructure with Apache Mesos. Finally, you will deep dive into the different aspect of SMACK using a few case studies. By the end of the book, you will be able to integrate all the components of the SMACK stack and use them together to achieve highly effective and fast data processing.

Apache Spark on Apache Mesos


Here we explain in detail how to run Apache Spark on Mesos.

We have two options:

  • Upload the Spark binary package to a location accessible by Mesos and configure the Spark driver to connect to Mesos
  • Install Spark in the same location on all the Mesos slaves and set spark.mesos.executor.home to point to that location (see the sketch after this list)
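
Before walking through the first option, here is a minimal sketch of the second: it assumes the Spark distribution is already unpacked at the same path on every Mesos slave. The path /opt/spark, the application class, and the JAR name below are placeholders for illustration only:

# Every Mesos slave has Spark unpacked under /opt/spark (placeholder path)
$ spark-submit \
    --master mesos://master-host:5050 \
    --conf spark.mesos.executor.home=/opt/spark \
    --class com.example.MyApp \
    my-app.jar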

Follow these steps for the first option:

The first time we run a Mesos task on a Mesos slave, that slave must already have the Spark binary package available in order to run the Spark executor backend. The location accessible by Mesos can be HDFS, HTTP, or S3.

At the time of writing this book, Spark's version is 2.0.0. To download the package and upload it to HDFS, we use the following commands:

$ wget http://apache.mirrors.ionfish.org/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz
$ hadoop fs -put spark-2.0.0-bin-hadoop2.7.tgz /
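
Once the package is on HDFS, the driver must tell the Mesos executors where to fetch it from. A minimal sketch, assuming a placeholder NameNode address of hdfs-host:9000, is to export SPARK_EXECUTOR_URI in conf/spark-env.sh (the equivalent spark.executor.uri property can also be set in conf/spark-defaults.conf):

# conf/spark-env.sh on the machine running the Spark driver
# hdfs-host:9000 is a placeholder NameNode address
export SPARK_EXECUTOR_URI=hdfs://hdfs-host:9000/spark-2.0.0-bin-hadoop2.7.tgz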

In the Spark driver program, set the master URL to the Apache Mesos master URL, in the following form:

  • For a single Mesos master cluster: mesos://master-host:5050...
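
For example, the interactive Spark shell can be pointed at a single Mesos master as follows; this sketch assumes the master listens on the default port 5050 and that spark.executor.uri (or SPARK_EXECUTOR_URI) has been configured as shown earlier:

# Connect the Spark shell to the Mesos master on the default port
$ ./bin/spark-shell --master mesos://master-host:5050

The same --master value is passed to spark-submit when submitting a packaged application.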