
Apache Spark 2.x Machine Learning Cookbook

By: Mohammed Guller, Siamak Amirghodsi, Shuen Mei, Meenakshi Rajendran, Broderick Hall

Overview of this book

Machine learning aims to extract knowledge from data, relying on fundamental concepts in computer science, statistics, probability, and optimization. Learning about algorithms enables a wide range of applications, from everyday tasks such as product recommendations and spam filtering to cutting-edge applications such as self-driving cars and personalized medicine. You will gain hands-on experience applying these principles using Apache Spark, a resilient cluster computing system well suited for large-scale machine learning tasks. This book begins with a quick overview of setting up the necessary IDEs to facilitate the execution of the code examples covered in the various chapters. It also highlights some key issues developers face while working with machine learning algorithms on the Spark platform. We progress by uncovering the various Spark APIs and the implementation of ML algorithms, developing classification systems, recommendation engines, text analytics, clustering, and learning systems. Toward the final chapters, we'll focus on building high-end applications and explain various unsupervised methodologies and the challenges to tackle when implementing big data ML systems.
Table of Contents (20 chapters)

Configuring IntelliJ to work with Spark and run Spark ML sample codes


We need to run some sample code to ensure that the project settings are correct before we can run the samples provided by Spark or any of the programs listed in this book.

Getting ready

We need to be particularly careful when configuring the project structure and global libraries. After we set everything up, we proceed to run the sample ML code provided by the Spark team to verify the setup. Sample code can be found under the Spark directory or can be obtained by downloading the Spark source code with samples.

How to do it...

The following are the steps for configuring IntelliJ to work with Spark MLlib and for running the sample ML code provided by Spark in the examples directory. The examples directory can be found under your Spark home directory. Use the Scala samples to proceed:

  1. Click on the Project Structure... option, as shown in the following screenshot, to configure project settings:
  2. Verify the settings:
  3. Configure Global Libraries. Select Scala SDK as your global library:
  4. Select the JARs for the new Scala SDK and let the download complete:
  5. Select the project name:
  6. Verify the settings and additional libraries:
  7. Add dependency JARs. Select Modules under the Project Settings in the left-hand pane and click on Dependencies to choose the required JARs, as shown in the following screenshot:
  8. Select the JAR files provided by Spark. Choose Spark's default installation directory and then select the lib directory:
  9. We then select the JAR files for the examples that are provided with Spark out of the box.
  10. Add the required JARs by verifying that you selected and imported all the JARs listed under External Libraries in the left-hand pane:
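As an alternative to selecting the JARs by hand in the IDE, the same dependencies can be declared in an sbt build file and resolved automatically. The following is a minimal sketch, not the book's official build file; the artifact coordinates are the standard Spark 2.x ones on Maven Central, but the project name and the exact Scala/Spark versions are assumptions you should adjust to match your installation:

```scala
// build.sbt -- a minimal sketch for a Spark 2.x ML project.
// Spark 2.0.x builds are published for Scala 2.11; adjust versions
// to match your local Spark installation.
name := "spark-ml-cookbook"          // hypothetical project name

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.0.0",
  "org.apache.spark" %% "spark-sql"   % "2.0.0",
  "org.apache.spark" %% "spark-mllib" % "2.0.0"
)
```

With this in place, IntelliJ's sbt import will populate External Libraries for you, which is easier to keep in sync than picking JARs from the lib directory manually.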

The next step is to download and install the Flume and Kafka JARs. For the purposes of this book, we have used the Maven repo:

  1. Download and install the Kafka assembly:
  2. Download and install the Flume assembly:
  3. After the download is complete, move the downloaded JAR files to the lib directory of Spark. We used the C drive when we installed Spark:
  4. Open your IDE and verify that all the JARs under the External Libraries folder on the left, as shown in the following screenshot, are present in your setup:
  5. Build the example projects in Spark to verify the setup:
  6. Verify that the build was successful:
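If you manage dependencies through sbt instead of copying JARs into Spark's lib directory, the Kafka and Flume connectors can be pulled from the same Maven repo with the standard coordinates. This is a sketch under the assumption that you are on a Spark 2.0.x build for Scala 2.11; pick the version that matches your installation:

```scala
// Streaming connector dependencies -- standard Maven Central
// coordinates for the Spark 2.x streaming integrations.
libraryDependencies ++= Seq(
  // Kafka 0.8 connector for Spark Streaming
  "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.0.0",
  // Flume connector for Spark Streaming
  "org.apache.spark" %% "spark-streaming-flume" % "2.0.0"
)
```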

There's more...

Prior to Spark 2.0, we needed a library from Google called Guava for facilitating I/O and for providing a set of rich methods for defining tables and then letting Spark broadcast them across the cluster. Due to dependency issues that were hard to work around, Spark 2.0 no longer uses the Guava library. Make sure you use the Guava library if you are using Spark versions prior to 2.0 (it is required in version 1.5.2). The library can be accessed at the following URL:

https://github.com/google/guava/wiki

You may want to use Guava version 15.0, which can be found here:

https://mvnrepository.com/artifact/com.google.guava/guava/15.0

If you are using installation instructions from previous blogs, make sure to exclude the Guava library from the installation set.
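In sbt terms, the two situations look roughly like the following sketch. The exclusion call is standard sbt; the versions shown are assumptions based on the text above (Guava 15.0 with Spark 1.5.2):

```scala
// Pre-2.0 Spark only (e.g., 1.5.2): pin Guava 15.0 explicitly and
// exclude any transitively-pulled copy to avoid version conflicts.
libraryDependencies += "com.google.guava" % "guava" % "15.0"

libraryDependencies += ("org.apache.spark" %% "spark-core" % "1.5.2")
  .exclude("com.google.guava", "guava")

// Spark 2.0 and later: no Guava dependency is needed at all.
```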

See also

If there are other third-party libraries or JARs required for the completion of the Spark installation, you can find those in the following repository:

https://repo1.maven.org/maven2/org/apache/spark/