
Methods for fraud detection


In the previous section, we described our business use case and prepared our Spark computing platform and datasets. In this section, we select the analytical methods, or predictive models (equations), for this fraud detection project; in other words, we map our business use case to machine learning methods.

For fraud detection, both supervised and unsupervised machine learning are commonly used. In this case, however, we will use supervised machine learning, because we have good labeled data for our target variable, fraud, and because our practical goal is to reduce fraud while allowing legitimate business transactions to continue.
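Before any supervised model can be fitted, the labeled data must be in the form Spark MLlib expects: a features vector column plus a numeric label column. The following is a minimal sketch of that preparation step; the file path, feature column names, and the binary fraud column are hypothetical placeholders rather than this project's actual schema:

```scala
// A sketch of loading and assembling labeled transaction data.
// Path and column names below are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col
import org.apache.spark.ml.feature.VectorAssembler

val spark = SparkSession.builder()
  .appName("FraudDetection")
  .getOrCreate()

// Assume a CSV of transactions with a binary "fraud" column (0 or 1)
val transactions = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("data/transactions.csv")

// Combine the (hypothetical) feature columns into the single vector
// column that Spark MLlib estimators expect
val assembler = new VectorAssembler()
  .setInputCols(Array("amount", "accountAge", "txPerDay"))
  .setOutputCol("features")

// Cast the label to double, the type MLlib classifiers work with
val labeled = assembler.transform(transactions)
  .withColumn("label", col("fraud").cast("double"))
```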

Many models are suitable for modeling and predicting fraud, including logistic regression and decision trees. Selecting among them can be difficult, because the best choice depends on the data at hand. One solution is to first run all the models and then select the best ones using...
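As a concrete illustration of this run-and-compare approach, the sketch below fits both candidate models with Spark MLlib and scores them on a held-out split. It assumes the labeled DataFrame from the previous sketch; the hyperparameter settings and the use of area under the ROC curve as the yardstick are illustrative assumptions, not the book's prescribed procedure:

```scala
// A sketch of fitting both candidate models and comparing them on
// held-out data; assumes the "labeled" DataFrame built above.
import org.apache.spark.ml.Transformer
import org.apache.spark.ml.classification.{DecisionTreeClassifier, LogisticRegression}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

// Hold out 20% of the data for an honest comparison
val Array(train, test) = labeled.randomSplit(Array(0.8, 0.2), seed = 42L)

// Illustrative hyperparameters, not tuned values
val lr = new LogisticRegression().setMaxIter(100)
val dt = new DecisionTreeClassifier().setMaxDepth(5)

// Area under the ROC curve; raw accuracy is misleading when fraud
// labels are heavily imbalanced
val evaluator = new BinaryClassificationEvaluator()
  .setMetricName("areaUnderROC")

val candidates: Seq[(String, Transformer)] = Seq(
  "logistic regression" -> lr.fit(train),
  "decision tree"       -> dt.fit(train)
)

// Score each fitted model on the test split and report its AUC
for ((name, model) <- candidates) {
  val auc = evaluator.evaluate(model.transform(test))
  println(f"$name%-20s AUC = $auc%.4f")
}
```

AUC is used here because fraud labels are typically highly imbalanced, which makes raw accuracy misleading; under this 80/20 split, the model with the higher held-out AUC would be carried forward.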