Mastering Machine Learning with Spark 2.x

By: Michal Malohlava, Alex Tellez, Max Pumperla

Overview of this book

The purpose of machine learning is to build systems that learn from data. Being able to understand trends and patterns in complex data is critical to success; it is one of the key strategies for unlocking growth in today's challenging marketplace. With the meteoric rise of machine learning, developers are now keen to find out how they can make their Spark applications smarter. This book shows you how to transform data into actionable knowledge. It begins by covering the machine learning primitives provided by the MLlib and H2O libraries. You will learn how to use binary classification to detect the Higgs boson particle in the huge volume of data produced by the CERN particle collider, and how to classify daily health activities using ensemble methods for multi-class classification. Next, you will solve a typical regression problem involving flight delay predictions and write sophisticated Spark pipelines. You will analyze Twitter data with the help of the doc2vec algorithm and K-means clustering. Finally, you will build different pattern mining models using MLlib, perform complex manipulation of DataFrames using Spark and Spark SQL, and deploy your application in a Spark Streaming environment.
Ensemble Methods for Multi-Class Classification

Supervised learning task

As in the previous chapter, we need to prepare the training and validation data. Here, we'll again use the Spark API to split the data:

// split the input data into 80% training and 20% validation subsets
val trainValidSplits = inputData.randomSplit(Array(0.8, 0.2))
val (trainData, validData) = (trainValidSplits(0), trainValidSplits(1))
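
Note that randomSplit is nondeterministic by default, so each run can produce a slightly different split. If you need reproducible results across runs, Spark's randomSplit also accepts an explicit seed; the seed value in this short sketch is arbitrary:

// same 80/20 split, made reproducible with a fixed seed (42 is an arbitrary choice)
val reproducibleSplits = inputData.randomSplit(Array(0.8, 0.2), 42)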

Now, let's perform a grid search using a simple decision tree and a few hyperparameters:

import org.apache.spark.ml.classification.DecisionTreeClassifier

val gridSearch =
  for (
    hpImpurity <- Array("entropy", "gini");
    hpDepth <- Array(5, 20);
    hpBins <- Array(10, 50))
  yield {
    println(s"Building model with: impurity=${hpImpurity}, depth=${hpDepth}, bins=${hpBins}")
    // train one decision tree for this hyperparameter combination
    val model = new DecisionTreeClassifier()
      .setFeaturesCol("reviewVector")
      .setLabelCol("label")
      .setImpurity(hpImpurity)
      .setMaxDepth(hpDepth)
      .setMaxBins(hpBins)
      .fit(trainData)
    model
  }
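
The grid search yields one fitted model per hyperparameter combination. To pick a winner, each model can be scored on the held-out validation split. The following is a minimal sketch, not the book's own selection code; it assumes the column names used above and relies on Spark's MulticlassClassificationEvaluator with the accuracy metric:

import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator

// compare the "prediction" column produced by transform() against the "label" column
val evaluator = new MulticlassClassificationEvaluator()
  .setLabelCol("label")
  .setPredictionCol("prediction")
  .setMetricName("accuracy")

// score every fitted model on the validation data and keep the most accurate one
val bestModel = gridSearch.maxBy(model => evaluator.evaluate(model.transform(validData)))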