
Cross-validation and bootstrapping of predictive models using the caret package


In this section, we will examine how to assess the reliability of a model using cross-validation. We start by explaining what cross-validation is.

Cross-validation

You might remember that, in several chapters, we used half of the data to train the model and the other half to test it. The aim of this process was to ensure that the high reliability of a classification, for instance, reflected true relationships in the data rather than the fitting of noise. We have seen, for instance, in the previous chapter, that the reliability of a classification is usually higher on the training set than on the test set (unseen data).
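To make this concrete, here is a minimal sketch of such a half-and-half split; the iris data set and the k-NN classifier from the class package are assumptions chosen for illustration, not material taken from the chapter:

# Half/half split: train on one half of the data, test on the other half
# (iris and class::knn are illustrative assumptions)
library(class)

set.seed(123)                                    # make the split reproducible
n <- nrow(iris)
train_idx <- sample(n, size = n / 2)             # half of the rows for training
train <- iris[train_idx, ]
test  <- iris[-train_idx, ]

pred_train <- knn(train[, 1:4], train[, 1:4], cl = train$Species, k = 3)
pred_test  <- knn(train[, 1:4], test[, 1:4],  cl = train$Species, k = 3)

mean(pred_train == train$Species)                # accuracy on the training set
mean(pred_test  == test$Species)                 # accuracy on unseen data

The accuracy on the training set is typically at least as high as on the test set, which is exactly why the test set is needed as a check against fitting noise.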

The process of using half of the data for training and half for testing is actually a special case of cross-validation, that is, two-fold cross-validation. We can perform cross-validation using more folds. Two very common approaches are ten-fold cross-validation and leave-one-out cross-validation...
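As a sketch of how such resampling can be set up with the caret package, assuming the iris data set and a k-NN classifier purely for illustration:

# Ten-fold cross-validation with caret (data set, classifier, and tuning grid
# are illustrative assumptions)
library(caret)

set.seed(123)
ctrl <- trainControl(method = "cv", number = 10)   # ten-fold cross-validation

cv_model <- train(Species ~ ., data = iris,
                  method    = "knn",
                  trControl = ctrl,
                  tuneGrid  = data.frame(k = 3))

cv_model   # accuracy averaged over the ten held-out folds

# Leave-one-out cross-validation only requires a different control object:
#   trainControl(method = "LOOCV")
# Bootstrap resampling is caret's default:
#   trainControl(method = "boot", number = 25)

Switching between cross-validation, leave-one-out, and bootstrapping therefore only involves changing the trainControl() object passed to train(); the model specification itself stays the same.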