Boosting methods typically use trees as base learners, and hence the idea of variable importance carries over directly from trees, bagging, and random forests: we simply sum each variable's importance across the trees, just as we do with bagging or random forests.
For a boosted object fitted with the adabag package, the variable importance is extracted as follows:
> AB1$importance
 x1  x2
100   0
This means that the boosting method has not used the variable x2 at all. For gradient boosting objects, the importance is given by the summary function:
> summary(sin_gbm)
  var rel.inf
x   x     100
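As a self-contained sketch of how such importance values arise (the objects AB1 and sin_gbm above come from earlier examples; the data, seed, and variable names below are hypothetical), one might proceed along these lines:

```r
library(adabag)   # boosting() for AdaBoost-style classification
library(gbm)      # gbm() for gradient boosting

set.seed(123)
n  <- 200
x1 <- runif(n)
x2 <- runif(n)    # pure noise, unrelated to the response

# Classification: the class label depends on x1 only
yc <- factor(ifelse(x1 > 0.5, "A", "B"))
df <- data.frame(yc, x1, x2)
AB <- boosting(yc ~ x1 + x2, data = df, mfinal = 25)
AB$importance     # importance summed across the boosted trees;
                  # expect nearly all weight on x1, almost none on x2

# Regression: a single-predictor gradient boosting fit
yr <- sin(2 * pi * x1) + rnorm(n, sd = 0.1)
gb <- gbm(yr ~ x1, data = data.frame(yr, x1),
          distribution = "gaussian", n.trees = 100)
summary(gb, plotit = FALSE)   # relative influence is 100 for the lone variable
```

With only one predictor in the gbm fit, its relative influence is trivially 100; the adabag fit, by contrast, must split its importance between x1 and the noise variable x2, and almost all of it lands on x1.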
Here we have only one variable, so it is necessarily the one that explains the regressand, and we certainly did not need software to tell us that. The measure is, of course, useful in more complex cases, and it allows comparisons across the different tree-based ensembling methods. Let us move on to the next section.