To assess the predictive power of a classifier, you can run cross-validation to test the robustness of the classification model. In this recipe, we will introduce how to use bagging.cv to perform cross-validation with the bagging method.
In this recipe, we continue to use the telecom churn dataset as the input data source to perform k-fold cross-validation with the bagging method.
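If you have not prepared the training dataset yet, the following sketch shows one way to do so. It assumes the churn data ships with the C50 package (as in the earlier recipes of this chapter); the 70/30 split ratio and the seed are illustrative choices, not requirements:

```r
# Load the telecom churn data (assumption: it comes from the C50 package)
library(C50)
data(churn)   # provides churnTrain and churnTest

# Illustrative 70/30 split of churnTrain into a training and a test set
set.seed(2)
ind = sample(2, nrow(churnTrain), replace = TRUE, prob = c(0.7, 0.3))
trainset = churnTrain[ind == 1, ]
testset  = churnTrain[ind == 2, ]
```

The exact split does not matter for cross-validation itself; bagging.cv only needs a data frame with the churn label and its predictors.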
Perform the following steps to retrieve the minimum estimation error by performing cross-validation with the bagging method:
First, we use bagging.cv to perform a 10-fold cross-validation on the training dataset with 10 iterations (v specifies the number of folds, and mfinal the number of trees grown in each bagging run):

> churn.baggingcv = bagging.cv(churn ~ ., v=10, data=trainset, mfinal=10)
You can then obtain the confusion matrix from the cross-validation results:
> churn.baggingcv$confusion
               Observed Class
Predicted Class  yes   no
            no   100 1938
            yes  242   35
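The cross-validated error rate can be recovered directly from this confusion matrix: it is the proportion of off-diagonal (misclassified) cases. The following base R snippet recomputes it from the values shown above:

```r
# Rebuild the confusion matrix printed above (rows: predicted, cols: observed)
conf = matrix(c(100, 242, 1938, 35), nrow = 2,
              dimnames = list(Predicted = c("no", "yes"),
                              Observed  = c("yes", "no")))

# Misclassified cases: predicted "no" but observed "yes" (100),
# and predicted "yes" but observed "no" (35)
misclassified = conf["no", "yes"] + conf["yes", "no"]
error.rate = misclassified / sum(conf)
error.rate
```

This hand computation should match the error value that bagging.cv stores in its result object.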
Lastly, you can retrieve the minimum estimation error of the cross-validation results:

> churn.baggingcv$error