Applied Supervised Learning with R

By: Karthik Ramasubramanian, Jojo Moolayil

Overview of this book

R provides excellent visualization features that are essential for exploring data before using it in automated learning. Applied Supervised Learning with R helps you cover the complete process of employing R to develop applications using supervised machine learning algorithms for your business needs. The book starts by helping you develop your analytical thinking to create a problem statement using business inputs and domain research. You will then learn different evaluation metrics that compare various algorithms, and later progress to using these metrics to select the best algorithm for your problem. After finalizing the algorithm you want to use, you will study the hyperparameter optimization technique to fine-tune your set of optimal parameters. The book demonstrates how you can add different regularization terms to avoid overfitting your model. By the end of this book, you will have gained the advanced skills you need for modeling a supervised machine learning algorithm that precisely fulfills your business needs.

Chapter 7: Model Improvements


Activity 12: Perform Repeated K-Fold Cross-Validation and Grid Search Optimization

  1. Load the required packages mlbench, caret, and dplyr for this activity:

    library(mlbench)
    library(dplyr)
    library(caret)
  2. Load the PimaIndiansDiabetes dataset into memory from the mlbench package:

    data(PimaIndiansDiabetes)
    df <- PimaIndiansDiabetes
  3. Set the seed value to 2019 for reproducibility:

    set.seed(2019)
  4. Define the k-fold validation object using the trainControl function from the caret package, setting method to repeatedcv instead of cv. Add the repeats = 10 argument to trainControl to specify how many times the cross-validation is repeated:

    train_control <- trainControl(method = "repeatedcv", number = 5, repeats = 10, savePredictions = TRUE, verboseIter = TRUE)
  5. Define the grid for the hyperparameter mtry of the random forest model as (3, 4, 5):

    parameter_values <- expand.grid(mtry = c(3, 4, 5))
  6. Fit the model with the grid values, cross-validation object, and random forest classifier:

    model_rf_kfold <- train(diabetes ~ ., data = df, trControl = train_control, method = "rf", metric = "Accuracy", tuneGrid = parameter_values)
  7. Study the model performance by printing the average accuracy and standard deviation of accuracy:

    print(paste("Average Accuracy:", mean(model_rf_kfold$resample$Accuracy)))
    print(paste("Std. Dev Accuracy:", sd(model_rf_kfold$resample$Accuracy)))
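    Note that with number = 5 and repeats = 10, caret evaluates the selected model on 5 × 10 = 50 resamples, which is why averaging over the resample data frame gives a more stable accuracy estimate than a single k-fold run. As a quick sanity check (a sketch, assuming the model_rf_kfold object fitted in step 6):

    ```r
    # Sketch: repeated CV produces number * repeats resamples for the
    # best tuning parameter, so the resample data frame has 50 rows here.
    n_resamples <- nrow(model_rf_kfold$resample)
    stopifnot(n_resamples == 5 * 10)
    ```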
  8. Study the model performance by plotting the accuracy across different values of the hyperparameter:

    plot(model_rf_kfold)

    The final output is as follows:

    Figure 7.17: Model performance accuracy across different values of the hyperparameter

In this plot, we can see the cross-validated accuracy obtained for each candidate value of mtry. Combining repeated k-fold cross-validation with grid search in a single train call lets us both estimate performance robustly and select the hyperparameter value that maximizes average accuracy.
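Beyond the plot, caret also stores the grid-search outcome on the model object itself. A minimal sketch of how to inspect it, assuming the model_rf_kfold object fitted in step 6:

```r
# Sketch: inspect the grid-search results stored by caret.
# $results holds the mean accuracy (and its standard deviation)
# averaged over all resamples for each mtry value in the grid;
# $bestTune holds the winning hyperparameter setting.
print(model_rf_kfold$results[, c("mtry", "Accuracy", "AccuracySD")])
print(model_rf_kfold$bestTune)
```

By default, caret refits the final model on the full training data using the value in bestTune, so model_rf_kfold can be passed directly to predict.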