When comparing the random forest results with their bagging counterparts for the German credit and Pima Indians Diabetes datasets, we did not see much improvement in accuracy on the validation partition of the data. A potential reason is that the variance reduction achieved by bagging is already close to the optimum, and any further improvement in bias does not translate into higher accuracy.
We next consider a dataset available in the R package kernlab. The dataset, spam, is a collection of 4601 emails labeled as spam or non-spam, together with 57 variables derived from the email contents. The task is to build a good classifier for the spam identification problem. As with earlier problems, the dataset is first partitioned into training and validation partitions:
> data("spam") > set.seed(12345) > Train_Test <- sample(c("Train","Test"),nrow(spam),replace = TRUE,...