
Practical Predictive Analytics

By: Ralph Winters

Overview of this book

This is the go-to book for anyone interested in the steps needed to develop predictive analytics solutions, with examples from the worlds of marketing, healthcare, and retail. We'll start with a brief history of predictive analytics and learn about the different roles and functions people play within a predictive analytics project. Then, we will learn about the various ways of installing R along with their pros and cons, walk through a step-by-step installation of RStudio, and cover best practices for organizing your projects. With the installation complete, we will begin to acquire the skills necessary to input, clean, and prepare your data for modeling. We will learn the six specific steps needed to implement and successfully deploy a predictive model, starting with asking the right questions, moving through model development, and ending with deploying your predictive model into production. We will learn why collaboration is important and how agile, iterative modeling cycles can increase your chances of developing and deploying the best model. Finally, we will continue the journey into the cloud, extending your skill set with Databricks and SparkR, which allow you to develop predictive models on many gigabytes of data.
Table of Contents (19 chapters)
Title Page
Credits
About the Author
About the Reviewers
www.PacktPub.com
Customer Feedback
Preface

Constructing a decision tree using Rpart


While OneR is very good at determining simple classification rules, it is not able to construct full decision trees. However, we can extract a sample from Spark and route it to any R decision tree algorithm, such as rpart.

First, collect the sample

To illustrate this, let's first take a 50% sample of the stop and frisk dataframe. We also want to make sure that the amount of data we extract can be processed comfortably by base R, which holds data in memory and is therefore limited by your operating system and available RAM.

  • The code below first extracts a 50% sample from Spark and stores it in a local R dataframe named dflocal.
  • It then runs the str() command to verify the row count and the metadata:
# Take a 50% sample without replacement (seed = 123) and pull it into local R 
dflocal <- collect(sample(df, withReplacement = FALSE, fraction = 0.5, seed = 123)) 
str(dflocal) 

The output indicates that there are 11,311 rows, which is roughly 50% of the 22,563 rows from the Stop and Frisk data.

Decision tree using Rpart

We will run our rpart algorithm as a regression tree. Recall that a regression tree is used...
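As a minimal sketch of what fitting a regression tree on the collected sample can look like, the snippet below calls rpart with method = "anova", which requests a regression tree. The formula and column names (frisked, age, timeOfDay) are illustrative placeholders, not actual columns from the Stop and Frisk data:

```r
library(rpart)

# Fit a regression tree on the local dataframe collected from Spark.
# method = "anova" tells rpart to build a regression (not classification) tree.
fit <- rpart(frisked ~ age + timeOfDay,
             data   = dflocal,
             method = "anova")

printcp(fit)           # inspect the complexity-parameter table
plot(fit); text(fit)   # quick plot of the fitted tree
```

From here, the complexity-parameter table can guide pruning with prune() if the tree overfits the sample.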