Apache Spark 2.x Cookbook

By Rishi Yadav
Overview of this book

While Apache Spark 1.x gained a lot of traction and adoption in its early years, Spark 2.x delivers notable improvements in the areas of APIs, schema awareness, performance, Structured Streaming, and simplified building blocks for creating better, faster, smarter, and more accessible big data applications. This book uncovers all these features in the form of structured recipes for analyzing and maturing large and complex sets of data. Starting with installing and configuring Apache Spark with various cluster managers, you will learn to set up development environments. Further on, you will be introduced to working with RDDs, DataFrames, and Datasets to operate on schema-aware data, and to real-time streaming with various sources such as Twitter Stream and Apache Kafka. You will also work through recipes on machine learning, including supervised learning, unsupervised learning, and recommendation engines in Spark. Last but not least, the final few chapters delve deeper into the concepts of graph processing using GraphX, securing your implementations, cluster optimization, and troubleshooting.

Launching Spark on Amazon EC2


Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute instances in the cloud. Amazon EC2 provides the following features:

  • On-demand delivery of IT resources via the Internet
  • Provisioning of as many instances as you like
  • Payment for only the hours during which you use instances, much like a utility bill
  • No setup cost, no installation, and no overhead at all
  • Shutting down or terminating instances when you no longer need them
  • Making such instances available on familiar operating systems

EC2 provides different types of instances to meet all compute needs, such as general-purpose instances, micro instances, memory-optimized instances, storage-optimized instances, and others. It also offers a free tier of micro instances for trial purposes.
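If you just want to experiment with the free tier outside of Spark, and you have the AWS CLI installed (it is not used in the rest of this recipe), launching a single micro instance looks roughly like the following sketch; the AMI ID and key pair name are placeholders you would substitute with your own:

$ aws ec2 run-instances --image-id <ami-id> --instance-type t2.micro --key-name <key-pair> --count 1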

Getting ready

The spark-ec2 script comes bundled with Spark and makes it easy to launch, manage, and shut down clusters on Amazon EC2.

Before you start, log in to your Amazon AWS account via http://aws.amazon.com and do the following:

  1. Click on Security Credentials under your account name in the top-right corner.
  2. Click on Access Keys and Create New Access Key:
     1. Download the key file (let's save it in the /home/hduser/kp folder as spark-kp1.pem).
     2. Set permissions on the key file to 600 (see the example after these steps).
     3. Set environment variables to reflect the access key ID and secret access key (replace the sample values with your own):
$ echo "export AWS_ACCESS_KEY_ID="AKIAOD7M2LOWATFXFKQ"" >> /home/hduser/.bashrc
$ echo "export 
       AWS_SECRET_ACCESS_KEY="+Xr4UroVYJxiLiY8DLT4DLT4D4sxc3ijZGMx1D3pfZ2q"" >> 
         /home/hduser/.bashrc
$ echo "export PATH=$PATH:/opt/infoobjects/spark/ec2" >> /home/hduser/.bashrc

How to do it…

  1. Spark comes bundled with scripts to launch the Spark cluster on Amazon EC2. Let's launch the cluster using the following command:
$ cd /home/hduser
$ spark-ec2 -k <key-pair> -i <key-file> -s <num-slaves> launch <cluster-name>
<key-pair>: name of the EC2 key pair created in AWS
<key-file>: the private key file you downloaded
<num-slaves>: number of slave nodes to launch
<cluster-name>: name of the cluster
  2. Launch the cluster with example values:
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --hadoop-major-version 2 -s 3 launch spark-cluster
  3. Sometimes, the default availability zones are not available; in that case, retry the request, specifying the availability zone you want:
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem -z us-east-1b --hadoop-major-version 2 -s 3 launch spark-cluster
  4. If your application needs to retain data after the instances shut down, attach an EBS volume (for example, with 10 GB of space):
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --hadoop-major-version 2 --ebs-vol-size 10 -s 3 launch spark-cluster
  5. If you use Amazon's spot instances, here is the way to do it:
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --spot-price=0.15 --hadoop-major-version 2 -s 3 launch spark-cluster

Note

Spot instances allow you to name your own price for Amazon EC2's computing capacity. You simply bid for spare Amazon EC2 instances and run them whenever your bid exceeds the current spot price, which varies in real time and is based on supply and demand (source: www.amazon.com).
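If you want a feel for the going rate before bidding, and you have the AWS CLI installed (an assumption; it is not required by this recipe), a query along these lines shows recent spot prices; the instance type is only an example:

$ aws ec2 describe-spot-price-history --instance-types m3.large --max-items 5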

  6. After completing the preceding launch process, check the status of the cluster by going to the web UI URL that is printed at the end of the launch output.
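If you missed the URL in the console output, the get-master action of spark-ec2 prints the master's hostname again (a sketch, assuming the same cluster name as above); the Spark standalone master web UI typically listens on port 8080:

$ spark-ec2 get-master spark-cluster

You can then point your browser at http://<master-hostname>:8080.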
  7. Now, to access the Spark cluster on EC2, connect to the master node using the secure shell protocol (SSH):
$ spark-ec2 -k spark-kp1 -i /home/hduser/kp/spark-kp1.pem login spark-cluster
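The login action is essentially a convenience wrapper around ssh. If you prefer to connect directly, something along these lines should work, assuming the default spark-ec2 AMI user (root) and substituting your master's public DNS name:

$ ssh -i /home/hduser/kp/spark-kp1.pem root@<master-public-dns>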
  8. Check the directories in the master node and see what they do:

  • ephemeral-hdfs: This is the Hadoop instance for which data is ephemeral and gets deleted when you stop or restart the machine.
  • persistent-hdfs: Each node has a very small amount of persistent storage (approximately 3 GB). If you use this instance, data will be retained in that space.
  • hadoop-native: These are the native libraries that support Hadoop, such as the snappy compression libraries.
  • scala: This is the Scala installation.
  • shark: This is the Shark installation (Shark is no longer supported and has been replaced by Spark SQL).
  • spark: This is the Spark installation.
  • spark-ec2: These are the files that support this cluster deployment.
  • tachyon: This is the Tachyon installation.
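For reference, these directories live in the master node's home directory, so a quick listing after logging in should show something like the following (a sketch; the exact contents vary with the Spark and spark-ec2 versions):

$ ls ~
ephemeral-hdfs  hadoop-native  persistent-hdfs  scala  shark  spark  spark-ec2  tachyon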

 

  9. Check the HDFS version in the ephemeral instance:
$ ephemeral-hdfs/bin/hadoop version
Hadoop 2.0.0-cdh4.2.0
  10. Check the HDFS version in the persistent instance with the following command:
$ persistent-hdfs/bin/hadoop version
Hadoop 2.0.0-cdh4.2.0
  11. Change the logging level in the configuration:
$ cd spark/conf
  12. The default log level is too verbose, so let's change it to ERROR:
  • Create the log4j.properties file by renaming the template:
$ mv log4j.properties.template log4j.properties
  • Open log4j.properties in vi or your favorite editor:
$ vi log4j.properties
  • Change the second line from log4j.rootCategory=INFO, console to log4j.rootCategory=ERROR, console.
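If you prefer to make this change non-interactively, a single sed substitution achieves the same edit (run it from the spark/conf directory used above):

$ sed -i 's/log4j.rootCategory=INFO, console/log4j.rootCategory=ERROR, console/' log4j.properties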
  13. Copy the configuration to all the slave nodes after the change:
$ spark-ec2/copy-dir spark/conf
  14. Destroy the Spark cluster:
$ spark-ec2 destroy spark-cluster
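If you only want to pause the cluster rather than tear it down, spark-ec2 also provides stop and start actions; the following is a sketch, assuming the same key pair and cluster name as before (remember that stopping instances deletes anything in ephemeral-hdfs, while EBS-backed data is retained):

$ spark-ec2 stop spark-cluster
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem start spark-cluster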