Deployment


As demonstrated in the previous chapters, turning estimated models into scores is not especially challenging and can be done on non-Spark platforms. However, Apache Spark makes the process easier and faster.

With the notebook approach adopted in this chapter, we gain the ability to quickly produce new scores whenever the data or the customer's requirements change.
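For illustration, a minimal SparkR re-scoring sketch might look like the following. It assumes the model was trained with SparkR's spark.logit and saved with write.ml; the file paths, the CSV source, and the application name are hypothetical, not the book's actual pipeline:

    # Minimal SparkR re-scoring sketch (hypothetical paths)
    library(SparkR)
    sparkR.session(appName = "CreditScoring")

    # Load the latest snapshot of applicant data
    new_data <- read.df("hdfs:///data/credit/latest.csv",
                        source = "csv", header = "true", inferSchema = "true")

    # Reload the logistic regression model saved during training
    model <- read.ml("hdfs:///models/credit_logit")

    # Produce fresh default predictions for the new data
    scores <- predict(model, new_data)
    head(select(scores, "prediction"))

    sparkR.session.stop()

Because the notebook holds these few steps together, re-running them against refreshed data is all it takes to regenerate scores.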

Users will notice some similarity to the deployment work in the previous chapter, where we deployed scoring for fraud detection.

Scoring

From the coefficients of our predictive models, we can derive a risk score for possible default. This takes some work, as sketched after the formula below, but it gives the client the flexibility to change the scoring whenever needed.

With logistic regression, the process of producing scores is relatively easy, as it uses the following formula:

Specifically, Prob(Y_i = 1) = exp(BX_i) / (1 + exp(BX_i)) produces the default probability, where Y_i = 1 denotes a default and BX_i is the linear combination of the estimated coefficients B with the features X_i. In R, exp(coef(logit_model)) returns the exponentiated coefficients of a fitted model, which can be read as odds ratios.
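To make this concrete, here is a minimal base-R sketch that fits a logistic regression on simulated data, computes default probabilities with the formula above, and then converts them into a risk score. The simulated columns (income, debt_ratio) and the points-to-double-the-odds scaling are assumptions for illustration; the book's actual features and score scale may differ:

    # Simulated training data (hypothetical features)
    set.seed(42)
    train <- data.frame(
      default    = rbinom(1000, 1, 0.2),
      income     = rnorm(1000, 50000, 15000),
      debt_ratio = runif(1000, 0, 1)
    )

    # Fit the logistic regression: Prob(default = 1) given the features
    logit_model <- glm(default ~ income + debt_ratio,
                       data = train, family = binomial(link = "logit"))

    # Odds ratios: exponentiated coefficients
    exp(coef(logit_model))

    # Default probability via exp(BX) / (1 + exp(BX));
    # type = "link" returns the linear combination BX
    bx   <- predict(logit_model, newdata = train, type = "link")
    prob <- exp(bx) / (1 + exp(bx))

    # Equivalent built-in shortcut
    all.equal(prob, predict(logit_model, newdata = train, type = "response"))

    # One common (assumed) way to turn probabilities into a risk score:
    # points-to-double-the-odds (PDO) scaling of the log-odds
    pdo    <- 20                        # points that halve the odds of default
    factor <- pdo / log(2)
    offset <- 600 - factor * log(1/50)  # anchor the scale: score 600 at 1:50 odds
    odds   <- prob / (1 - prob)
    score  <- offset - factor * log(odds)
    summary(score)

With this assumed scaling, every 20 additional points cut the odds of default in half, which is what makes such a score easy for the client to recalibrate: changing pdo or the anchor point rescales the whole score without refitting the model.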