Until now, we have seen single learning algorithms of growing complexity. Ensembles offer an effective alternative: they tend to achieve better predictive accuracy by combining or chaining the results of multiple estimators that differ in the data samples they train on, their algorithm settings, or their types.
Depending on the method used to combine predictions, ensembles divide into two branches:
Averaging algorithms: These predict by averaging the results of various parallel estimators. Depending on how the estimators are varied, this branch further divides into four families: pasting, bagging, random subspaces, and random patches.
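As a sketch of the distinction between the four families, scikit-learn's `BaggingClassifier` can express all of them through its sampling parameters: pasting draws subsets of samples without replacement, bagging draws them with replacement, random subspaces draws subsets of features, and random patches draws subsets of both. The specific fractions (0.7) below are illustrative choices, not values from the text:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# A small synthetic problem, just to compare the four families
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Each family differs only in how every parallel estimator's
# training set is sampled from the original data
variants = {
    'pasting':   dict(bootstrap=False, max_samples=0.7),
    'bagging':   dict(bootstrap=True, max_samples=0.7),
    'subspaces': dict(bootstrap=False, max_features=0.7),
    'patches':   dict(bootstrap=False, max_samples=0.7, max_features=0.7),
}
for name, params in variants.items():
    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0, **params)
    score = cross_val_score(clf, X, y, cv=3).mean()
    print(f'{name:>9}: {score:.3f}')
```

The final prediction in every case is obtained by averaging (for classifiers, majority-voting or probability-averaging) the outputs of the parallel estimators.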
Before delving into examples for both classification and regression, we will provide you with the steps to load a new classification dataset, the Covertype dataset. It features a large number of 30×30 meter patches of forest in the US, and the data pertaining to them was collected for the task of...
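One way to load this dataset, assuming scikit-learn's built-in `fetch_covtype` downloader (the download happens once and is then cached locally):

```python
from sklearn.datasets import fetch_covtype

# Fetches the Covertype dataset: 581,012 forest patches,
# each described by 54 cartographic features and labeled
# with one of 7 forest cover types
covertype = fetch_covtype()
print(covertype.data.shape)      # (581012, 54)
print(sorted(set(covertype.target)))  # cover-type labels 1.0 .. 7.0
```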