#### Overview of this book

This is the go-to book for anyone interested in the steps needed to develop predictive analytics solutions, with examples drawn from marketing, healthcare, and retail. We begin with a brief history of predictive analytics and a look at the different roles and functions people play within a predictive analytics project. Next, we cover the various ways of installing R, along with their pros and cons, a step-by-step installation of RStudio, and best practices for organizing your projects. With the installation complete, we move on to the skills needed to input, clean, and prepare data for modeling. We then walk through the six specific steps needed to implement and successfully deploy a predictive model, starting with asking the right questions, continuing through model development, and ending with deploying your predictive model into production. Along the way, we discuss why collaboration is important and how agile, iterative modeling cycles can increase your chances of developing and deploying the best possible model. Finally, we continue your journey into the cloud, extending your skill set with Databricks and SparkR, which allow you to develop predictive models on many gigabytes of data.
- Getting Started with Predictive Analytics
- The Modeling Process
- Inputting and Exploring Data
- Introduction to Regression Algorithms
- Introduction to Decision Trees, Clustering, and SVM
- Using Survival Analysis to Predict and Analyze Customer Churn
- Introduction to Spark Using R
- Exploring Large Datasets Using Spark
- Spark Machine Learning - Regression and Cluster Models
- Spark Models - Rule-Based Learning

## Another OneR example

This example uses the much larger diabetes dataset. Since most of the variables in this dataset are numeric, `OneR` can bin all of them:

1. First, read the Spark diabetes table using SQL; the table was registered as a temporary view in a previous chapter.
2. Collect a 15% random sample of the data and assign it to an R (not Spark!) dataframe named `local`.
3. Bin all of the available variables based upon their ability to predict the outcome, and assign the result to an R dataframe named `data`:
```
library(OneR)

# Read the diabetes view registered earlier into a Spark DataFrame
df <- sql("SELECT outcome, age, mass, triceps, pregnant,
                  glucose, pressure, insulin, pedigree
           FROM global_temp.df_view")

# Collect a 15% random sample (without replacement) into an R dataframe
local <- collect(sample(df, FALSE, 0.15))

# Bin each numeric variable based on its ability to predict the outcome
data <- optbin(local, outcome ~ .)
summary(data)
```
4. Run the `OneR` model using all of the variables to predict the outcome. Recall that the outcome is an indication of whether or not diabetes is present:
```
# Fit a one-rule model on the binned data; verbose = TRUE prints the
# predictive accuracy of each candidate variable as it is evaluated
model <- OneR(data, outcome ~ ., verbose = TRUE)
summary(model)

...
```
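Once the model is fit, it can be used to score data. A minimal sketch, assuming the `model` and `data` objects from the steps above, and using the `predict()` and `eval_model()` functions that ship with the `OneR` package:

```r
# Score the binned dataframe with the one-rule model
# (assumes `model` and `data` exist from the previous steps)
prediction <- predict(model, data)

# Cross-tabulate predicted vs. actual outcomes and report
# accuracy and error rates
eval_model(prediction, data$outcome)
```

Because the sample is drawn randomly from the Spark table, the reported accuracy will vary slightly from run to run.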