Results explanation


After we have passed the model evaluation stage and selected the estimated and evaluated model as our final model, our next task is to explain the results to the university's leaders and technical staff.

In terms of explaining the machine learning results, the university is particularly interested in, first, understanding how its planned interventions affect student attrition and, second, which of the common reasons (finances, academic performance, social/emotional encouragement, and personal adjustment) has the biggest impact.

In the following sections, we will work on explaining the results, focusing on the most influential variables.

Calculating the impact of interventions

The following briefly summarizes some sample results, which we can produce using functions from the randomForest and decision tree models.

With Spark 1.5, you can use the following code to obtain a vector of feature importance:

val importances: Vector = model.featureImportances
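
As a minimal sketch of where this vector comes from (not taken from the book), the snippet below fits a random forest on a hypothetical student attrition DataFrame and pairs each importance score with its feature name. The DataFrame students and the column names attrition, finances, gpa, socialSupport, and adjustment are placeholder assumptions for illustration only.

// A hedged sketch: assemble features, fit a random forest, and rank
// the influencing variables by their importance scores.
import org.apache.spark.ml.classification.{RandomForestClassifier, RandomForestClassificationModel}
import org.apache.spark.ml.feature.VectorAssembler

// Assume `students` is a DataFrame with a binary label column "attrition"
// and numeric predictor columns (hypothetical names used here).
val assembler = new VectorAssembler()
  .setInputCols(Array("finances", "gpa", "socialSupport", "adjustment"))
  .setOutputCol("features")

val rf = new RandomForestClassifier()
  .setLabelCol("attrition")
  .setFeaturesCol("features")
  .setNumTrees(100)

val model: RandomForestClassificationModel = rf.fit(assembler.transform(students))

// featureImportances is aligned with the assembler's input columns, so
// zipping the two gives a readable ranking of the influencing variables.
val ranked = assembler.getInputCols
  .zip(model.featureImportances.toArray)
  .sortBy(-_._2)

ranked.foreach { case (name, importance) => println(f"$name%-15s $importance%.4f") }

Printing the ranked pairs in this way makes it easy to report, for example, whether finances or academic performance carries the largest share of the predictive weight.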

With the...