Spark notebooks


In this section, we first discuss notebook approaches for machine learning. We then give a full introduction to R Markdown as a mature notebook example, and finally introduce Spark's R notebook to complete the section.

After this section, readers will have mastered these notebook approaches and some related concepts, and will be ready to use them for managing and programming machine learning projects.

Notebook approach for ML

The notebook has become a favored approach to machine learning, not only for its interactivity but also for its reproducibility.

Most notebook interfaces consist of a series of code blocks, called cells. The development process is exploratory: a developer can write and run code in one cell, and then continue writing code in a subsequent cell depending on the results of the first. Particularly when analyzing large datasets, this interactive approach lets machine learning professionals quickly discover patterns or insights in the data. Notebook-style development therefore provides an exploratory, interactive way to write code and immediately examine the results, as the sketch below illustrates.
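
To illustrate, a two-cell notebook session might look like the following sketch in R; the file name and column names are hypothetical:

    # Cell 1: load the data and inspect a summary first
    # (the file and column names are hypothetical)
    flights <- read.csv("flights.csv")
    summary(flights$dep_delay)

    # Cell 2: written only after seeing Cell 1's output,
    # to follow up on a pattern noticed there
    late <- subset(flights, dep_delay > 60)
    mean(late$distance)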

Notebooks allow users to seamlessly mix code, outputs, and Markdown comments in the same document. With everything in one document, it is easier for machine learning professionals to reproduce their work at a later stage.

The notebook approach was adopted to ensure reproducibility, to align analysis with computation, and to align analysis with presentation, thereby ending the copy-and-paste style of research management.

Specifically, using notebook allows users to:

  • Analyze iteratively

  • Report transparently

  • Collaborate seamlessly

  • Compute with clarity

  • Assess reasoning, not only results

The notebook approach also provides a unified way to integrate many analytical tools into machine learning practice.

    Note

    For more about adopting an approach for reproducibility, please visit http://chance.amstat.org/2014/09/reproducible-paradigm/.

R Markdown

R Markdown is a very popular tool that helps data scientists and machine learning professionals generate dynamic reports and make their analytical workflows reproducible. It is one of the pioneering notebook tools.

According to RStudio:

"R Markdown is a format that enables easy authoring of reproducible web reports from R. It combines the core syntax of Markdown (an easy-to-write plain text format for web content) with embedded R code chunks that are run so their output can be included in the final document".

Therefore, we can use R and the Markdown package, plus some other dependent packages such as knitr, to author reproducible analytical reports. Moreover, using RStudio and the Markdown package together makes things especially easy for data scientists.
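
For instance, a Markdown sentence with an embedded R code chunk looks like the following inside an .Rmd file; cars is one of R's built-in datasets, and the triple-backtick delimiters are literal R Markdown chunk syntax:

    The built-in `cars` dataset has `r nrow(cars)` rows.

    ```{r cars-summary}
    summary(cars)   # this chunk runs when the report is knitted
    ```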

Using R Markdown is very easy for R users. As an example, let us create a report in the following three simple steps:

Step 1: Getting the software ready

  1. Download RStudio from http://rstudio.org/

  2. Set options for RStudio: go to Tools > Options, click on Sweave, and choose knitr under Weave Rnw files using.

Step 2: Installing the knitr package

  1. To install a package in RStudio, use Tools > Install Packages and then select a CRAN mirror and the package to install. Another way to install packages is to use the install.packages() function.

  2. To install the knitr package from the Carnegie Mellon Statlib CRAN mirror, we can use:

         install.packages("knitr", repos = "http://lib.stat.cmu.edu/R/CRAN/")

Step 3: Creating a simple report

  1. Create a blank R Markdown file: File > New > R Markdown. This opens a new .Rmd file.

  2. When you create the blank file, you will see an already-written template. One simple way to proceed is to replace the corresponding parts with your own information.

  3. After all your information is entered, click Knit HTML.

  4. You will now see that an .html file has been generated.
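
As a minimal sketch, an edited .Rmd file might look like the following; the title and author are placeholders, and knitting it produces an HTML report containing the text, the code, and the plot:

    ---
    title: "A First R Markdown Report"
    author: "Your Name"
    output: html_document
    ---

    ## Speed and stopping distance

    The built-in `cars` dataset has `r nrow(cars)` observations.

    ```{r cars-plot}
    plot(cars)   # rendered inline in the knitted report
    ```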

Spark notebooks

There are a few notebooks compatible with Apache Spark computing. Among them, the Databricks notebook is one of the best, as it was developed by the original Spark team. The Databricks notebook is similar to R Markdown, but is seamlessly integrated with Apache Spark.

Besides SQL, Python, and Scala, the Databricks notebook is now also available for R, and Spark 1.4 includes the SparkR package by default. That is, from now on, data scientists and machine learning professionals can effortlessly benefit from the power of Apache Spark in their R environment by writing and running R notebooks on top of Spark.

In addition to SparkR, any R package can easily be installed into the Databricks R notebook by using install.packages(). So, with the Databricks R notebook, data scientists and machine learning professionals have the power of R Markdown on top of Spark. Using SparkR, they can access and manipulate very large datasets (for example, terabytes of data) in distributed storage (for example, Amazon S3) or data warehouses (for example, Hive), and can even collect a SparkR DataFrame into a local R data frame, as the sketch below shows.
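
The following is a minimal sketch of that workflow using the Spark 1.4-era SparkR API, assuming a Databricks R notebook in which sqlContext is pre-created; the S3 bucket and path are hypothetical:

    library(SparkR)

    # read a large JSON dataset from distributed storage
    # (the bucket and path are hypothetical)
    events <- read.df(sqlContext, "s3n://my-bucket/events/", source = "json")

    # manipulate it with SparkR verbs; the computation runs on the cluster
    adults <- filter(events, events$age > 21)

    # collect a small slice back into a local R data frame
    local_sample <- collect(limit(adults, 100))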

Visualization is a critical part of any machine learning project. In R notebooks, data scientists and machine learning professionals can use any R visualization library, including R's base plotting, ggplot2, or lattice. As in R Markdown, plots are displayed inline in the R notebook. Users can apply Databricks' built-in display() function to any R data frame or SparkR DataFrame; the result appears as a table in the notebook, which can then be plotted with one click. As in other Databricks notebooks, such as the Python notebook, data scientists can also use the displayHTML() function in R notebooks to produce any HTML and JavaScript visualization.
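
Continuing the hypothetical events DataFrame from the earlier sketch, the inline-plotting flow might look like this; note that display() and displayHTML() are Databricks notebook built-ins and are not available in plain R:

    library(SparkR)
    library(ggplot2)

    # aggregate on the cluster, then collect the small result locally
    by_country <- collect(count(groupBy(events, events$country)))

    # any R plotting library renders inline in the notebook
    ggplot(by_country, aes(x = country, y = count)) +
      geom_bar(stat = "identity")

    # Databricks-only helper: renders a table that can be plotted with one click
    display(by_country)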

Databricks' end-to-end solution also makes it easy to build a machine learning pipeline from ingestion to production, and this applies to R notebooks as well: data scientists can schedule their R notebooks to run as jobs on Spark clusters. The results of each job, including visualizations, are immediately available to browse, which makes it much simpler and faster to turn work into production.

To sum up, R notebooks in Databricks let R users take advantage of the power of Spark through simple Spark cluster management, rich one-click visualizations, and instant deployment to production jobs. Databricks also offers a 30-day free trial.