
Apache Spark Machine Learning Blueprints

By: Alex Liu

Overview of this book

There's a reason why Apache Spark has become one of the most popular tools in machine learning: its ability to handle huge datasets at impressive speed means you can be much more responsive to the data at your disposal. This book shows you Spark at its very best, demonstrating how to connect it with R and unlock maximum value not only from the tool but also from your data. Packed with a range of project "blueprints" that demonstrate some of the most interesting challenges Spark can help you tackle, the book shows you how to use Spark notebooks and how to access, clean, and join different datasets. You will then put this knowledge into practice with real-world projects, in which you will see how Spark machine learning can help with everything from fraud detection to analyzing customer attrition. You'll also find out how to build a recommendation engine using Spark's parallel computing powers.
Table of Contents (18 chapters)

Data and feature preparation


Anyone who has worked with open data will agree that cleaning datasets takes a huge amount of time, much of it spent addressing data accuracy and data incompleteness.

Another main task is to merge all the datasets together, as crime, education, resource usage, request demand, and transportation each come from separate open datasets. We also have data from other sources, including the census.

In the Feature extraction section of Chapter 2, Data Preparation for Spark ML, we reviewed a few methods for feature extraction and discussed their implementation on Apache Spark. All the techniques discussed there can be applied to our data here.

Besides data merging, we will also need to spend considerable time on feature development, since our models depend on well-constructed features to yield insights for this project.

Therefore, for this project, we actually need to conduct data merging, and then feature development and selection...