
Apache Spark Machine Learning Blueprints

By: Alex Liu

Overview of this book

There's a reason why Apache Spark has become one of the most popular tools in machine learning: its ability to handle huge datasets at impressive speed means you can be far more responsive to the data at your disposal. This book shows you Spark at its best, demonstrating how to connect it with R and unlock maximum value not only from the tool but also from your data. It is packed with project "blueprints" that tackle some of the most interesting challenges Spark can help you solve. You'll learn how to use Spark notebooks and to access, clean, and join different datasets, and then put that knowledge into practice with real-world projects in which Spark machine learning helps with everything from fraud detection to analyzing customer attrition. You'll also find out how to build a recommendation engine using Spark's parallel computing power.

Summary


The work presented in this chapter extends both Chapter 10, Learning Telco Data on Spark, and Chapter 9, City Analytics on Spark. Like Chapter 9, it works with open datasets; like Chapter 10, it takes a dynamic approach so that readers can apply all the techniques learned so far to achieve better machine learning results and develop the best analytical solutions. This chapter can therefore serve both as a review and as an opportunity to synthesize everything you have learned.

In this chapter, through a real-life project of learning from open data, we repeated the same step-by-step RM4Es process used in the previous chapters: we processed open data on Apache Spark and then selected models (equations). For each model, we estimated its coefficients and then evaluated these models against...
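The estimate-and-evaluate steps of that cycle can be sketched in miniature. The following is a plain-NumPy illustration with synthetic data standing in for a cleaned open dataset, not the book's Spark code: the "equation" is a simple linear model, estimation is ordinary least squares, and evaluation is a root-mean-square error check. All variable names and data here are hypothetical.

```python
import numpy as np

# Hypothetical synthetic data standing in for a cleaned open dataset:
# a target driven by two features plus an intercept and a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + 0.5 + rng.normal(scale=0.1, size=200)

# Equation: a linear model with an intercept term.
X1 = np.column_stack([np.ones(len(X)), X])

# Estimation: fit the coefficients by ordinary least squares.
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Evaluation: score the fitted model with RMSE (in a real project,
# this would be computed on a held-out set, not the training data).
rmse = float(np.sqrt(np.mean((X1 @ coefs - y) ** 2)))
```

In the book's Spark-based workflow, the same three steps are carried out at scale with distributed model fitting and evaluation; the structure of the loop, however, is the same.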