Another approach to regularization is to create an ensemble of models and combine their outputs, for example by model averaging or some other algorithm for combining individual model results. Like many of the previous regularization methods, model averaging is a fairly simple concept. If you have several models that each generate a set of predictions, each model may make errors, but they won't necessarily all make the same errors. Where one model predicts too high a value, another may predict one that is too low, so that, when averaged, some of the errors cancel out, yielding a more accurate prediction than any single model would have produced.
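This cancellation effect is easy to demonstrate. The sketch below is a minimal illustration (not a full ensemble implementation): three hypothetical models are simulated as the true targets plus independent noise, so their errors are uncorrelated, and the averaged prediction ends up with a lower error than any individual model.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = np.linspace(0.0, 10.0, 200)  # targets the models try to predict

# Simulate three imperfect models whose errors are independent
# (i.e., not perfectly correlated with one another).
preds = [y_true + rng.normal(scale=1.0, size=y_true.shape) for _ in range(3)]

# Model averaging: simply take the mean of the individual predictions.
avg_pred = np.mean(preds, axis=0)

def rmse(pred):
    return float(np.sqrt(np.mean((pred - y_true) ** 2)))

individual = [rmse(p) for p in preds]
print("individual RMSEs:", [round(r, 3) for r in individual])
print("averaged RMSE:  ", round(rmse(avg_pred), 3))
```

With independent errors of equal scale, the averaged prediction's error shrinks roughly in proportion to the square root of the number of models, which is exactly the cancellation the text describes.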
To better understand model averaging, consider two extreme cases. In the first, suppose the models being averaged are identical, or at least generate identical (that is, perfectly correlated) predictions. In that case, averaging provides no benefit, but also does no harm...
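The perfectly correlated case is worth verifying directly. In this small sketch (using the same simulated-prediction setup as an assumption, not a real model), averaging several copies of the same prediction reproduces that prediction exactly, so the error is unchanged: no benefit, but no harm.

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = np.linspace(0.0, 10.0, 200)

# One imperfect model's predictions.
pred = y_true + rng.normal(scale=1.0, size=y_true.shape)

# "Ensemble" of three perfectly correlated (here: identical) models.
avg_pred = np.mean([pred, pred, pred], axis=0)

# The average is identical to the original prediction, so its error
# is also identical: averaging has no effect in this extreme case.
print("average equals single model:", np.allclose(avg_pred, pred))
```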