Applied Supervised Learning with R

By: Karthik Ramasubramanian, Jojo Moolayil
Overview of this book

R provides excellent visualization features that are essential for exploring data before using it in automated learning. Applied Supervised Learning with R helps you cover the complete process of employing R to develop applications using supervised machine learning algorithms for your business needs. The book starts by helping you develop your analytical thinking to create a problem statement using business inputs and domain research. You will then learn different evaluation metrics that compare various algorithms, and later progress to using these metrics to select the best algorithm for your problem. After finalizing the algorithm you want to use, you will study the hyperparameter optimization technique to fine-tune your set of optimal parameters. The book demonstrates how you can add different regularization terms to avoid overfitting your model. By the end of this book, you will have gained the advanced skills you need for modeling a supervised machine learning algorithm that precisely fulfills your business needs.
Table of Contents (12 chapters)

Hold-One-Out Validation


In this technique, we take k-fold validation to its logical extreme. Instead of creating k partitions, where k would typically be 5 or 10, we set the number of partitions equal to the number of available data points, so each partition contains exactly one sample. We train on all the samples except one, test the model on the held-out sample, and repeat this n times, where n is the number of training samples. Finally, we compute the average error, just as in k-fold validation. The major drawback of this technique is that the model is trained n times, making it computationally expensive. If we are dealing with a fairly large dataset, this validation method is best avoided.

Hold-one-out validation is also called Leave-One-Out Cross-Validation (LOOCV). The following visual demonstrates hold-one-out validation for n samples:

Figure 7.6: Hold-one-out validation
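Before turning to the book's exercise, the procedure above can be sketched in a few lines of base R. This is an illustrative sketch only, assuming the built-in mtcars dataset and a simple linear model predicting mpg from wt and hp; it is not the book's own example:

```r
# Hold-one-out (LOOCV) sketch using base R and the built-in mtcars dataset.
# Each of the n iterations holds out exactly one sample for testing.
data(mtcars)
n <- nrow(mtcars)
errors <- numeric(n)

for (i in seq_len(n)) {
  train <- mtcars[-i, ]                # all samples except the i-th
  test  <- mtcars[i, , drop = FALSE]   # the single held-out sample
  fit   <- lm(mpg ~ wt + hp, data = train)
  pred  <- predict(fit, newdata = test)
  errors[i] <- (test$mpg - pred)^2     # squared error on the held-out point
}

# Average the per-sample errors, akin to averaging fold errors in k-fold
loocv_mse <- mean(errors)
loocv_mse
```

Note that the loop fits the model n times, which is exactly the computational cost the text warns about; packages such as caret wrap this same procedure behind a single resampling option.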

The following exercise performs hold-one-out or leave-one-out cross-validation...