In previous chapters, we introduced many classification methods, each with its own advantages and disadvantages. When choosing the best-fitting model, you need to compare the performance measures produced by the different prediction models. To make this comparison easier, the caret package lets us generate and compare model performance in a uniform way. In this recipe, we will use functions provided by the caret package to compare models trained with different algorithms on the same dataset.
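As a minimal sketch of this workflow (not the recipe's exact code, and using the built-in iris dataset as a stand-in), you can train two models with the same resampling scheme and then compare their resampled performance with caret's resamples() function:

```r
# Hedged sketch: compare two algorithms trained on the same data with
# identical resampling, so their performance estimates are comparable.
library(caret)
data(iris)

# Shared resampling scheme: 10-fold cross-validation repeated three times.
control <- trainControl(method = "repeatedcv", number = 10, repeats = 3)

# Train a decision tree (rpart) and a k-NN model with the same control object.
rpart.model <- train(Species ~ ., data = iris, method = "rpart",
                     trControl = control)
knn.model   <- train(Species ~ ., data = iris, method = "knn",
                     trControl = control)

# Collect the resampling results side by side and summarize them.
cv.values <- resamples(list(rpart = rpart.model, knn = knn.model))
summary(cv.values)
```

Because both models are evaluated on the same resampling folds, the summary from resamples() gives a like-for-like comparison of their accuracy distributions.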
Perform the following steps to generate an ROC curve of each fitted model:
- Install and load the pROC package:

  > install.packages("pROC")
  > library("pROC")
- Set up the training control with 10-fold cross-validation repeated three times:
> control = trainControl(method = "repeatedcv", + ...
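The call above is truncated in the source. As an assumption about how such a call is typically completed for an ROC-based comparison (the recipe's exact arguments may differ), a full trainControl() specification often looks like this:

```r
# Hedged sketch of a typical ROC-oriented trainControl() call; the exact
# arguments used by the recipe are not shown in the truncated source.
library(caret)

control <- trainControl(method = "repeatedcv",          # repeated cross-validation
                        number = 10,                    # 10 folds
                        repeats = 3,                    # repeated three times
                        classProbs = TRUE,              # retain class probabilities,
                                                        # needed for ROC curves
                        summaryFunction = twoClassSummary)  # report ROC, Sens, Spec
```

Setting classProbs = TRUE and summaryFunction = twoClassSummary makes train() optimize and report the area under the ROC curve for two-class problems, which is what the pROC-based plots in this recipe rely on.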