Spark Cluster on Google Compute Engine - DataProc

Cloud Dataproc is a managed Spark and Hadoop service running on Google Compute Engine. Its automation helps you create clusters quickly, manage them easily, and save money by turning them off when you don't need them.

In this section, we will learn how to create a Spark cluster using DataProc and run a sample app on it.

Make sure that you have created a Google Compute Engine account and installed the Google Cloud SDK (https://cloud.google.com/sdk/gcloud/).
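
If the SDK has just been installed, you will typically need to authenticate and point gcloud at your project before the commands later in this section will work. The following is a minimal sketch, using the rd-spark-1 project ID that appears later in this section; substitute your own project ID:

  $ gcloud auth login
  $ gcloud config set project rd-spark-1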

Hadoop and Spark Versions

DataProc supports a specific set of Hadoop and Spark versions, which will change over time as new versions come out.

For more information, go to http://cloud.google.com/dataproc-versions.

In the following steps, we will use the Google Cloud Console (the user interface for creating a Spark cluster and submitting a job).

Creating a Cluster

You can create a Spark cluster by going to the Cloud Platform Console. Select the project, and then click on Continue to open the Clusters page. You will see the Cloud Dataproc clusters that belong to your project, if you have created any.

Click on the Create a cluster button to open the Create a Cloud Dataproc cluster page. Refer to the following screenshot:

Once you click on Create a cluster, a detailed form shows up, as shown in the following screenshot:

The previous screenshot shows the Create a Cloud Dataproc cluster page with the default fields automatically filled in for a new cluster named cluster-1. Take a look at the following screenshot:

You can expand the workers, bucket, network, version, initialization, and access options panel to specify the number of worker nodes, a staging bucket, the network, the Cloud Dataproc image version, initialization actions, and project-level access for your cluster. Providing these values is optional.

The default cluster is created with no worker nodes, an auto-created staging bucket, and a default network. It also uses the latest released Cloud Dataproc image version. You can change these default settings:

Once you have configured all the fields on the page, click on the Create button to create the cluster. The name of the new cluster appears on the Clusters page, and its status is updated to Running once the Spark cluster is created.

Click on the cluster name created earlier to open the cluster details page. The Overview tab is selected by default and shows the CPU utilization graph.

You can examine jobs, instances, and so on for the cluster from the other tabs.
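
If you prefer the command line, an equivalent cluster can also be created with the gcloud CLI. The following is a minimal sketch rather than the exact form used in the console walkthrough; the cluster name and zone are assumptions, and the flags may vary slightly between SDK versions:

  $ gcloud beta dataproc clusters create cluster-1 --zone us-central1-a

Optional flags such as --num-workers, --bucket, and --image-version correspond to the worker, staging bucket, and image version options described earlier.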

Submitting a Job

To submit a job to the cluster, go to the Cloud Platform Console. Select the appropriate project and then click on Continue. The first time you submit a job, the following dialog appears:

Click on Submit a job:

To submit a Spark sample job, fill in the fields on the Submit a job page, as follows:

  1. Select a cluster name from the cluster list on the screen.
  2. Set Job type to Spark.
  3. Add file:///usr/lib/spark/lib/spark-examples.jar to Jar files. Here, file:/// denotes a Hadoop LocalFileSystem scheme; Cloud Dataproc installs /usr/lib/spark/lib/spark-examples.jar on the cluster's master node when it creates the cluster. Alternatively, you can specify a Cloud Storage path (gs://my-bucket/my-jarfile.jar) or an HDFS path (hdfs://examples/myexample.jar) to a custom jar.
  4. Set Main class or jar to org.apache.spark.examples.SparkPi.
  5. Set Arguments to the single argument 1000.

Click on Submit to start the job.
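
The same job can also be submitted from the terminal with the gcloud CLI. The following is a minimal sketch of an equivalent invocation, assuming the cluster is named cluster-1 as in the screenshots; flag names may vary slightly between SDK versions:

  $ gcloud beta dataproc jobs submit spark --cluster cluster-1 \
      --class org.apache.spark.examples.SparkPi \
      --jars file:///usr/lib/spark/lib/spark-examples.jar -- 1000

Everything after -- is passed to the application as arguments, matching the 1000 entered in the Arguments field.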

Once the job starts, it is added to the Jobs list. Refer to the following screenshot:

Once the job is complete, its status changes:

Take a look at the job output by executing the following command from the terminal with the appropriate Job ID.

In our case, the Job ID was 1ed4d07f-55fc-45fe-a565-290dcd1978f7 and the project ID was rd-spark-1; hence, the command looks like this:

  $ gcloud beta dataproc --project=rd-spark-1 jobs wait 1ed4d07f-55fc-45fe-a565-290dcd1978f7

The (abridged) output is shown here:

Waiting for job output...
16/01/28 10:04:29 INFO akka.event.slf4j.Slf4jLogger: Slf4jLogger started

16/01/28 10:04:29 INFO Remoting: Starting remoting
...
Submitted application application_1453975062220_0001
Pi is roughly 3.14157732

You can also SSH into the Spark instance and run spark-shell in interactive mode.
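
For example, assuming the cluster is named cluster-1 and uses the zone from earlier, the Dataproc master node is typically named cluster-1-m; the following is a sketch that connects to it and estimates Pi interactively (the node name, zone, and Scala snippet are illustrative, not taken from the book):

  $ gcloud compute ssh cluster-1-m --zone us-central1-a
  $ spark-shell
  ...
  scala> val n = 100000
  scala> val inside = sc.parallelize(1 to n).filter { _ => val x = math.random; val y = math.random; x * x + y * y < 1 }.count()
  scala> println("Pi is roughly " + 4.0 * inside / n)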