
ML workflow examples


To further understand machine learning workflows, let us review some examples here.

In the later chapters of this book, we will work on risk modelling, fraud detection, customer view, churn prediction, and recommendation. For many of these types of projects, the goal is often to identify causes of certain problems, or to build a causal model. Below is one example of a workflow to develop a causal model.

  1. Check the data structure to ensure a good understanding of the data (see the first sketch after this list):

    • Is the data cross-sectional? Is implicit timing incorporated?

    • Are categorical variables used?

  2. Check missing values:

    • Don't know or forgot answers may be recoded as neutral or treated as a special category

    • Some variables may have a lot of missing values

    • Recode some variables as needed

  3. Conduct some descriptive studies to begin telling stories:

    • Compare means and use cross-tabulations

    • Check the variability of some key variables (standard deviation and variance)

  4. Select groups of independent variables (exogenous variables):

    • As candidates for causes

  5. Basic descriptive statistics:

    • Mean, standard deviation, and frequencies for all variables

  6. Measurement work:

    • Study the dimensions of some measurements (exploratory factor analysis, or EFA, may be useful here)

    • May form measurement models

  7. Local models (see the logistic regression sketch after this list):

    • Identify sections of the whole picture to explore relationships

    • Use cross-tabulations

    • Use graphical plots

    • Use logistic regression

    • Use linear regression

  8. Conduct some partial correlation analysis to help with model specification (see the partial correlation sketch after this list).

  9. Propose structural equation models by using the results of step 8:

    • Identify main structures and substructures

    • Connect the measurement models with the structural models

  10. Initial fits:

    • Use SPSS to create datasets for LISREL or Mplus

    • Program in LISREL or Mplus

  11. Model modification:

    • Use SEM results (mainly model fit indices) as a guide

    • Re-analyze partial correlations

  12. Diagnostics:

    • Distribution

    • Residuals

    • Curves

  13. Final model estimation may be reached here:

    • If not, repeat steps 11 and 12

  14. Explain the model (causal effects are identified and quantified).

    Note

    Also refer to http://www.researchmethods.org/step-by-step1.pdf
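
As a rough illustration of steps 1 to 3 (and the basic descriptive statistics of step 5), the following PySpark sketch inspects the schema, counts missing values per column, and prints summary statistics and a cross-tabulation. The file name survey.csv and the columns gender, churned, and income are hypothetical placeholders, not data used in this book.

    # A minimal sketch of steps 1-3: schema check, missing values, and
    # descriptive statistics. File and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("workflow-descriptives").getOrCreate()

    # Step 1: load the data and inspect its structure
    df = spark.read.csv("survey.csv", header=True, inferSchema=True)
    df.printSchema()

    # Step 2: count missing values in every column
    df.select([
        F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns
    ]).show()

    # Step 3 (and 5): summary statistics and a cross-tabulation
    df.describe("income").show()              # mean, stddev, min, max
    df.crosstab("gender", "churned").show()   # frequency table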
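
For the local models of step 7, a logistic regression can be fitted with Spark ML. The sketch below reuses the df loaded in the previous sketch and assumes a binary outcome churned with two numeric predictors, age and income; all of these names are illustrative.

    # A minimal sketch of step 7: a logistic regression as a local model.
    # df is the DataFrame from the previous sketch; the columns churned,
    # age, and income are illustrative only.
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    # Assemble the candidate causes into a single feature vector
    # (rows with nulls are dropped first, as VectorAssembler cannot handle them)
    assembler = VectorAssembler(inputCols=["age", "income"], outputCol="features")
    train = (assembler.transform(df.dropna())
                      .withColumnRenamed("churned", "label")
                      .select("features", "label"))

    # Fit the local model and inspect the estimated effects
    lr = LogisticRegression(maxIter=20)
    model = lr.fit(train)
    print(model.coefficients, model.intercept)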
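
Step 8 calls for partial correlations. One standard way to obtain them is from the inverse of the correlation matrix (the precision matrix P): the partial correlation between variables i and j, controlling for all the others, is -P[i, j] / sqrt(P[i, i] * P[j, j]). The sketch below collects a few assumed numeric columns to the driver and applies that formula with NumPy; it is only a sketch, and the column list is hypothetical.

    # A minimal sketch of step 8: partial correlations via the precision matrix.
    # df is the DataFrame from the earlier sketches; the numeric columns
    # listed here are hypothetical.
    import numpy as np

    cols = ["age", "income", "tenure", "satisfaction"]
    sample = np.array(df.select(cols).dropna().collect(), dtype=float)

    corr = np.corrcoef(sample, rowvar=False)   # ordinary correlation matrix
    prec = np.linalg.inv(corr)                 # precision matrix

    d = np.sqrt(np.diag(prec))
    partial = -prec / np.outer(d, d)           # partial correlation matrix
    np.fill_diagonal(partial, 1.0)

    print(dict(zip(cols, partial[0])))         # partials of "age" with the rest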

Spark Pipelines

The Apache Spark team has recognized the importance of machine learning workflows and has developed Spark Pipelines to handle them well.

Spark ML represents an ML workflow as a Pipeline, which consists of a sequence of PipelineStages to be run in a specific order.

PipelineStages include Spark Transformers and Spark Estimators; Spark Evaluators are used alongside them to evaluate the fitted models.

ML workflows can be very complicated, and creating and tuning them can be very time consuming. The Spark ML Pipeline was created to make the construction and tuning of ML workflows easy, and especially to represent the following main stages:

  1. Loading data

  2. Extracting features

  3. Estimating models

  4. Evaluating models

  5. Explaining models

With regard to the above tasks, Spark Transformers can be used to extract features, Spark Estimators can be used to train and estimate models, and Spark Evaluators can be used to evaluate models.
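
To make the three roles concrete, the sketch below runs one of each on a tiny invented DataFrame: a Transformer (VectorAssembler) that only transforms the data, an Estimator (LogisticRegression) whose fit() returns a model, and an Evaluator (BinaryClassificationEvaluator) that scores the predictions. The data and parameter values are made up for illustration.

    # One Transformer, one Estimator, and one Evaluator on invented data.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import BinaryClassificationEvaluator

    spark = SparkSession.builder.appName("stage-roles").getOrCreate()
    data = spark.createDataFrame(
        [(1.0, 2.0, 0.0), (2.0, 1.0, 1.0), (3.0, 4.0, 1.0), (0.5, 3.0, 0.0)],
        ["x1", "x2", "label"])

    # Transformer: deterministically rewrites the dataset
    assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
    assembled = assembler.transform(data)

    # Estimator: fit() learns from the data and returns a model (a Transformer)
    lr_model = LogisticRegression(maxIter=10).fit(assembled)
    predictions = lr_model.transform(assembled)

    # Evaluator: summarizes how well the predictions match the labels
    evaluator = BinaryClassificationEvaluator(metricName="areaUnderROC")
    print(evaluator.evaluate(predictions))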

Technically, in Spark, a Pipeline is specified as a sequence of stages, and each stage is either a Transformer or an Estimator (Evaluators are applied afterwards, to the resulting predictions). These stages are run in order, and the input dataset is modified as it passes through each stage. For Transformer stages, the transform() method is called on the dataset. For Estimator stages, the fit() method is called to produce a Transformer (which becomes part of the PipelineModel, or fitted Pipeline), and that Transformer's transform() method is called on the dataset.
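
As an illustration of these mechanics, the sketch below chains two Transformers (Tokenizer and HashingTF) with one Estimator (LogisticRegression) into a Pipeline, following the pattern in the Spark documentation. Calling fit() returns a PipelineModel whose transform() applies every fitted stage in order; the toy training rows are invented for the example.

    # A minimal Pipeline sketch: Tokenizer -> HashingTF -> LogisticRegression.
    # The tiny training set is invented purely for illustration.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import Tokenizer, HashingTF
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("pipeline-demo").getOrCreate()
    training = spark.createDataFrame([
        (0, "spark is fast", 1.0),
        (1, "hadoop map reduce", 0.0),
        (2, "spark ml pipelines", 1.0),
        (3, "plain old batch jobs", 0.0),
    ], ["id", "text", "label"])

    tokenizer = Tokenizer(inputCol="text", outputCol="words")
    hashing_tf = HashingTF(inputCol="words", outputCol="features")
    lr = LogisticRegression(maxIter=10, regParam=0.01)

    # The Pipeline is itself an Estimator: fit() runs the stages in order and
    # returns a PipelineModel, a Transformer made of the fitted stages.
    pipeline = Pipeline(stages=[tokenizer, hashing_tf, lr])
    model = pipeline.fit(training)

    test = spark.createDataFrame([(4, "spark streaming")], ["id", "text"])
    model.transform(test).select("id", "text", "prediction").show()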

The specifications given above are all for linear Pipelines. It is possible to create non-linear Pipelines as long as the data flow graph forms a Directed Acyclic Graph (DAG).
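
A simple non-linear case is two feature branches that are later merged: a text branch (Tokenizer followed by HashingTF) and a raw numeric column, combined by a VectorAssembler before the final Estimator. The stages are still passed as one topologically ordered list, but the column-level data flow forms a DAG. The sketch below is illustrative only, with invented column names and data.

    # A sketch of a DAG-shaped data flow: a text branch and a numeric column
    # merged by a VectorAssembler. Data and column names are invented.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import Tokenizer, HashingTF, VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("dag-pipeline").getOrCreate()
    reviews = spark.createDataFrame([
        ("good product fast delivery", 5.0, 1.0),
        ("broken on arrival",          1.0, 0.0),
        ("works as described",         4.0, 1.0),
        ("never buying again",         2.0, 0.0),
    ], ["review", "rating", "label"])

    tokenizer = Tokenizer(inputCol="review", outputCol="words")
    tf = HashingTF(inputCol="words", outputCol="text_features", numFeatures=1024)
    merge = VectorAssembler(inputCols=["text_features", "rating"],
                            outputCol="features")
    lr = LogisticRegression(maxIter=10)

    # The stage list is a topological ordering of the underlying column DAG
    pipeline = Pipeline(stages=[tokenizer, tf, merge, lr])
    model = pipeline.fit(reviews)
    model.transform(reviews).select("review", "prediction").show(truncate=False)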

Note

For more information on Spark Pipelines, please visit:

http://spark.apache.org/docs/latest/ml-guide.html#pipeline