Apache Spark 2.x Cookbook

By Rishi Yadav

Overview of this book

While Apache Spark 1.x gained wide traction and adoption in its early years, Spark 2.x delivers notable improvements in the areas of APIs, schema awareness, performance, and Structured Streaming, simplifying the building blocks for better, faster, smarter, and more accessible big data applications. This book uncovers these features in the form of structured recipes to analyze and mature large and complex sets of data. Starting with installing and configuring Apache Spark with various cluster managers, you will learn to set up development environments. Further on, you will be introduced to working with RDDs, DataFrames, and Datasets to operate on schema-aware data, and to real-time streaming with various sources such as the Twitter stream and Apache Kafka. You will also work through recipes on machine learning, including supervised learning, unsupervised learning, and recommendation engines in Spark. Last but not least, the final few chapters delve deeper into the concepts of graph processing using GraphX, securing your implementations, cluster optimization, and troubleshooting.
Table of Contents (19 chapters)
Title Page
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface

Installing Spark from binaries


You can build Spark from the source code, or you can download precompiled binaries from http://spark.apache.org. For a standard use case, binaries are good enough, and this recipe will focus on installing Spark using binaries.

Getting ready

At the time of writing, Spark's current version is 2.1. Please check the latest version on Spark's download page at http://spark.apache.org/downloads.html. The binaries are built against the most recent stable version of Hadoop. If you need a specific version of Hadoop, the recommended approach is to build Spark from the source code, which we will cover in the next recipe.

All the recipes in this book are developed using Ubuntu Linux, but they should work fine on any POSIX environment. Spark expects Java to be installed and the JAVA_HOME environment variable set.
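Before proceeding, you can confirm both prerequisites from the terminal. This is a minimal sketch; the helper name check_prereqs is ours, and the exact JDK package and JAVA_HOME path vary by system:

```shell
# Verify the Java prerequisites described above.
check_prereqs() {
  # java must be on the PATH
  command -v java >/dev/null 2>&1 || { echo "java missing"; return 1; }
  # JAVA_HOME must be set and non-empty
  [ -n "${JAVA_HOME:-}" ] || { echo "JAVA_HOME unset"; return 1; }
  echo "prerequisites ok"
}

# Example: check_prereqs   # prints "prerequisites ok" on a ready system
```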

In Linux/Unix systems, there are certain standards for the location of files and directories, which we are going to follow in this book. The following is a quick cheat sheet:

Directory   Description
/bin        Essential command binaries
/etc        Host-specific system configuration
/opt        Add-on application software packages
/var        Variable data
/tmp        Temporary files
/home       User home directories

How to do it…

Here are the installation steps:

  1. Open the terminal and download the binaries using the following command:
$ wget http://d3kbcqa49mib13.cloudfront.net/spark-2.1.0-bin-hadoop2.7.tgz
  2. Unpack the binaries:
$ tar -zxf spark-2.1.0-bin-hadoop2.7.tgz
  3. Rename the folder containing the binaries by stripping the version information:
$ sudo mv spark-2.1.0-bin-hadoop2.7 spark
  4. Move the configuration folder to the /etc folder so that it can be turned into a symbolic link later:
$ sudo mv spark/conf /etc/spark
  5. Create your company-specific installation directory under /opt. As the recipes in this book are tested on the infoobjects sandbox, use infoobjects as the directory name. Create the /opt/infoobjects directory:
$ sudo mkdir -p /opt/infoobjects
  6. Move the spark directory to /opt/infoobjects, as it's an add-on software package:
$ sudo mv spark /opt/infoobjects/
  7. Change the permissions of the spark home directory to 0755 (user: read, write, and execute; group and others: read and execute):
$ sudo chmod -R 755 /opt/infoobjects/spark
  8. Move to the spark home directory:
$ cd /opt/infoobjects/spark
  9. Create the symbolic link:
$ sudo ln -s /etc/spark conf
  10. Append the Spark binaries path to PATH in .bashrc (single quotes keep $PATH from being expanded before it is written to the file):
$ echo 'export PATH=$PATH:/opt/infoobjects/spark/bin' >> /home/hduser/.bashrc
  11. Open a new terminal.
  12. Create the log directory in /var:
$ sudo mkdir -p /var/log/spark
  13. Make hduser the owner of Spark's log directory:
$ sudo chown -R hduser:hduser /var/log/spark
  14. Create Spark's tmp directory:
$ mkdir /tmp/spark
  15. Configure Spark with the help of the following command lines:
     $ cd /etc/spark
     $ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
     $ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
     $ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
     $ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
  16. Change the ownership of the spark home directory to root:
$ sudo chown -R root:root /opt/infoobjects/spark
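Once the steps above are complete, a quick sanity check can confirm the resulting layout. This is a sketch assuming the /opt/infoobjects layout used in this recipe; the helper name verify_spark_layout is ours:

```shell
# Check the directory layout produced by the installation steps.
verify_spark_layout() {
  home="$1"                        # e.g. /opt/infoobjects/spark
  # spark-shell must exist and be executable
  [ -x "$home/bin/spark-shell" ] || { echo "spark-shell not found"; return 1; }
  # conf must be a symbolic link (pointing at /etc/spark in this recipe)
  [ -L "$home/conf" ] || { echo "conf is not a symlink"; return 1; }
  echo "layout ok"
}

# Usage: verify_spark_layout /opt/infoobjects/spark
```

If both checks pass, running spark-shell from the new terminal should start the Spark REPL.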