
Practical Predictive Analytics

By: Ralph Winters

Overview of this book

This is the go-to book for anyone interested in the steps needed to develop predictive analytics solutions, with examples from the worlds of marketing, healthcare, and retail. We'll start with a brief history of predictive analytics and learn about the different roles and functions people play within a predictive analytics project. Then, we will learn about various ways of installing R along with their pros and cons, walk through a step-by-step installation of RStudio, and cover best practices for organizing your projects. After the installation, we will begin to acquire the skills necessary to input, clean, and prepare your data for modeling. We will learn the six specific steps needed to implement and successfully deploy a predictive model, starting from asking the right questions, through model development, and ending with deploying your predictive model into production. We will learn why collaboration is important and how agile, iterative modeling cycles can increase your chances of developing and deploying the most successful model. We will continue the journey in the cloud, extending your skill set with Databricks and SparkR, which allow you to develop predictive models on many gigabytes of data.

Exporting data from Spark back into R

It will often be the case that some of the analysis you wish to perform is not available within SparkR, and you will need to extract some of the data from Spark objects and return it to base R.

For example, we were able to run correlation and covariance functions earlier directly on a Spark dataframe by naming individual pairs of variables. However, we did not generate correlation matrices for the entire dataframe, for a couple of reasons:

  • The capability to do this may not be built into the version of Spark that you are currently running

  • Even if it were available, these kinds of calculations can be very computationally expensive to perform
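The pairwise approach described above can be sketched as follows. This is a minimal, hedged example assuming an active SparkR session and a Spark dataframe `sdf` with numeric columns named "age" and "income" (the column names are hypothetical, not from the original text):

```r
library(SparkR)
sparkR.session()

# Pairwise statistics computed directly on the Spark dataframe,
# one pair of columns at a time (column names are illustrative):
corr(sdf, "age", "income")   # Pearson correlation for a single pair
cov(sdf, "age", "income")    # covariance for the same pair
```

Each call returns a single scalar to R; the heavy computation stays in Spark, which is why this works even when a full correlation matrix would be impractical.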

One strategy is to use Spark functions to explore the basic characteristics of the data, and/or to utilize specialized packages written for Spark (such as MLlib) to perform this.
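As a quick sketch of that strategy, SparkR's built-in functions can compute distributed summary statistics without pulling data into R. Again, `sdf` and the column name "income" are assumed for illustration:

```r
# Distributed exploration: the statistics are computed inside Spark,
# and only the small result table is brought back for display.
head(summary(sdf))             # count, mean, stddev, min, max for all numeric columns
head(describe(sdf, "income"))  # the same statistics for one (hypothetical) column
```

Both `summary` and `describe` return a small Spark dataframe of results, so `head` collects only a handful of rows regardless of how large `sdf` is.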

For other cases, in which you want to perform more in-depth analysis, simply extract a sample from the Spark dataframe,...
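A minimal sketch of that sampling workflow, assuming `sdf` is a Spark dataframe and using an illustrative sampling fraction and seed:

```r
# Draw a sample inside Spark, then bring only the sample into base R.
sample_sdf <- sample(sdf, withReplacement = FALSE, fraction = 0.10, seed = 42)
local_df <- collect(sample_sdf)   # now an ordinary R data.frame

# With the sample in base R, analyses unavailable in SparkR become easy,
# e.g. the full correlation matrix discussed earlier:
numeric_cols <- sapply(local_df, is.numeric)
cor(local_df[, numeric_cols])
```

The key point is that `collect` should be applied to the sample, not to the full Spark dataframe, so that only a manageable amount of data crosses into R.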