Machine Learning with Spark - Second Edition

By: Rajdeep Dua, Manpreet Singh Ghotra

Overview of this book

This book will teach you about popular machine learning algorithms and their implementation. You will learn how various machine learning concepts are implemented in the context of Spark ML. You will start by installing Spark on single-node and multi-node clusters. Next, you'll see how to execute Scala- and Python-based programs for Spark ML. Then we will take a few datasets and go deeper into clustering, classification, and regression. Toward the end, we will also cover text processing using Spark ML. Once you have learned the concepts, you can apply them to implement algorithms in green-field projects or to migrate existing systems to this new platform, for example, from Mahout or scikit-learn to Spark ML. By the end of this book, you will have acquired the skills to leverage Spark's features to create your own scalable machine learning applications and power a modern data-driven business.

Installing and setting up Spark locally

Spark can be run using the built-in standalone cluster scheduler in local mode. This means that all the Spark processes run within the same JVM, effectively as a single, multithreaded instance of Spark. Local mode is very useful for prototyping, development, debugging, and testing. However, it can also be useful in real-world scenarios to perform parallel computation across the cores of a single computer.

As Spark's local mode is fully compatible with the cluster mode, programs written and tested locally can be run on a cluster with just a few additional steps.
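
To make this concrete, the following is a minimal Scala sketch (the object name LocalModeExample is ours, and it assumes a Spark 2.x dependency on the classpath) showing that switching between local mode and a cluster typically only requires changing the master URL, or leaving it unset and supplying the master on the command line when submitting the application:

  import org.apache.spark.{SparkConf, SparkContext}

  // Minimal sketch: the master URL is the only thing that ties this
  // program to local mode; point it at a cluster to run it there instead.
  object LocalModeExample {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf()
        .setAppName("LocalModeExample")
        .setMaster("local[*]") // all local cores; e.g. spark://host:7077 for a cluster

      val sc = new SparkContext(conf)

      // A trivial parallel computation to confirm the local "cluster" works
      val sum = sc.parallelize(1 to 1000).sum()
      println(s"Sum of 1 to 1000 = $sum")

      sc.stop()
    }
  }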

The first step in setting up Spark locally is to download the latest version from http://spark.apache.org/downloads.html, which contains links to download various versions of Spark as well as to obtain the latest source code via GitHub.

The documentation available at http://spark.apache.org/docs/latest/ is a comprehensive resource to learn more about Spark. We highly recommend that you explore it!

Spark needs to be built against a specific version of Hadoop in order to access the Hadoop Distributed File System (HDFS) as well as standard and custom Hadoop input sources; prebuilt packages are provided for Cloudera's Hadoop distribution, MapR's Hadoop distribution, and Hadoop 2 (YARN). Unless you wish to build Spark against a specific Hadoop version, we recommend that you download the prebuilt Hadoop 2.7 package from http://d3kbcqa49mib13.cloudfront.net/spark-2.0.2-bin-hadoop2.7.tgz.

Spark requires the Scala programming language (version 2.10.x or 2.11.x at the time of writing this book) in order to run. Fortunately, the prebuilt binary package comes with the Scala runtime included, so you don't need to install Scala separately in order to get started. However, you will need a Java Runtime Environment (JRE) or Java Development Kit (JDK) installed.

Refer to the software and hardware list in this book's code bundle for installation instructions. R 3.1+ is needed.

Once you have downloaded the Spark binary package, unpack its contents and change into the newly created directory by running the following commands:

  $ tar xfvz spark-2.0.2-bin-hadoop2.7.tgz
  $ cd spark-2.0.2-bin-hadoop2.7

Spark places the user scripts for running Spark programs in the bin directory. You can test whether everything is working correctly by running one of the example programs included with Spark. Run the following command:

  $ bin/run-example SparkPi 100

This will run the example in Spark's local standalone mode. In this mode, all the Spark processes are run within the same JVM, and Spark uses multiple threads for parallel processing. By default, the preceding example uses a number of threads equal to the number of cores available on your system. Once the program is executed, you should see something similar to the following lines toward the end of the output:

...
16/11/24 14:41:58 INFO Executor: Finished task 99.0 in stage 0.0 (TID 99). 872 bytes result sent to driver
16/11/24 14:41:58 INFO TaskSetManager: Finished task 99.0 in stage 0.0 (TID 99) in 59 ms on localhost (100/100)
16/11/24 14:41:58 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 1.988 s
16/11/24 14:41:58 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/11/24 14:41:58 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 2.235920 s
Pi is roughly 3.1409527140952713

The preceding command calls the org.apache.spark.examples.SparkPi class.
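
SparkPi estimates Pi with a simple Monte Carlo simulation: it samples random points in the unit square and counts how many fall inside the unit circle. The following simplified Scala sketch (not the exact source of the bundled example) illustrates the idea:

  import scala.math.random
  import org.apache.spark.sql.SparkSession

  // Simplified Monte Carlo Pi estimate in the spirit of the SparkPi example
  object PiSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder.appName("PiSketch").getOrCreate()
      val slices = if (args.length > 0) args(0).toInt else 2
      val n = 100000 * slices

      // Count the random points in the unit square that land inside the unit circle
      val count = spark.sparkContext.parallelize(1 until n, slices).map { _ =>
        val x = random * 2 - 1
        val y = random * 2 - 1
        if (x * x + y * y <= 1) 1 else 0
      }.reduce(_ + _)

      println(s"Pi is roughly ${4.0 * count / (n - 1)}")
      spark.stop()
    }
  }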

This class takes a master parameter of the local[N] form, where N is the number of threads to use. Passing local[*] uses all of the cores available on the local machine, which is a common usage.

To use only two threads, run the following command instead:

  $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master local[2] ./examples/jars/spark-examples_2.11-2.0.2.jar 100
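
The same master string also works with the interactive shell. For example, if you start it with bin/spark-shell --master local[2], the SparkContext that the shell creates for you (available as sc) reflects that setting; the following is a hypothetical session sketch:

  scala> sc.master
  res0: String = local[2]

  scala> sc.defaultParallelism
  res1: Int = 2

  scala> sc.parallelize(1 to 100).count()
  res2: Long = 100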