
Practical Predictive Analytics

By: Ralph Winters

Overview of this book

This is the go-to book for anyone interested in the steps needed to develop predictive analytics solutions, with examples from the worlds of marketing, healthcare, and retail. We'll start with a brief history of predictive analytics and learn about the different roles and functions people play within a predictive analytics project. Then we will cover various ways of installing R, along with their pros and cons, a step-by-step installation of RStudio, and best practices for organizing your projects. Once the installation is complete, we will begin to acquire the skills necessary to input, clean, and prepare data for modeling. We will learn the six specific steps needed to implement and successfully deploy a predictive model, starting with asking the right questions, continuing through model development, and ending with deploying the model into production. We will learn why collaboration is important and how agile, iterative modeling cycles can increase your chances of developing and deploying the best model. Finally, we will continue the journey into the cloud by learning about Databricks and SparkR, which allow you to develop predictive models on many gigabytes of data.

Some useful Spark functions to explore your data

count and groupBy

We can also use the count and groupBy functions to aggregate individual variables.

Here is an example of using this to tally the number of observations by outcome. Since the result is another DataFrame, we can use the head function to write the results to the console.
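The pattern described above can be sketched as follows. This is a minimal sketch, assuming a SparkR session is already running and that out_sd is the SparkDataFrame (with an "outcome" column) used elsewhere in this chapter:

```r
library(SparkR)

# Group the rows by the "outcome" column and count the observations
# in each group. The result is another SparkDataFrame.
counts_df <- SparkR::count(groupBy(out_sd, "outcome"))

# head() collects a few rows and prints them to the console.
head(counts_df)
```

The SparkR::count prefix avoids a clash with base R's count-like functions when other packages are loaded.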


You might have to alter the number of rows returned by head if you change the query. It is always a good idea to filter results with a function such as head to make sure that you are not printing hundreds of rows (or more).

However, you also need to ensure that you do not cut off all of your output. If you are unsure of the number of rows, first assign the result to a DataFrame, and then check the number of rows with nrow:
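The defensive check described above can be sketched like this, again assuming a running SparkR session and the out_sd SparkDataFrame from earlier in the chapter:

```r
library(SparkR)

# Assign the aggregate to a DataFrame first, rather than printing it directly.
counts_df <- SparkR::count(groupBy(out_sd, "outcome"))

# Check how many rows the aggregate produced before printing anything.
nrow(counts_df)

# If the row count is small, printing the whole result is safe;
# otherwise, limit the output with head().
head(counts_df)
```

Note that nrow on a SparkDataFrame triggers a Spark job, so on very large aggregates this check itself has a cost; it is still far cheaper than accidentally printing the full result.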

The following line of code counts the number of rows by outcome. I know that there should be only two outcomes, but I wrap the count function in a head call just to program defensively.

head(SparkR::count(groupBy(out_sd, "outcome")))