Boosting: If You Get It Wrong, Just Boost and Try Again
What was the reason behind doing bagging, again?
If you trained up a bunch of decision stumps on the whole dataset over and over again, they'd be identical. By taking random selections of the dataset, you introduce some variety to your stumps and end up capturing nuances in the training data that a single stump never could.
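As a quick refresher in code, here's roughly what that looks like. This is only a sketch: the toy data, the scikit-learn stumps, and the choice of 50 rounds are stand-ins, not anything the text prescribes.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # toy data standing in for your training set
y = (X[:, 0] + X[:, 1] > 0).astype(int)

stumps = []
for _ in range(50):                            # 50 is an arbitrary number of rounds
    # Bagging: draw a random selection of rows (with replacement),
    # then train one stump on just that sample.
    idx = rng.integers(0, len(X), size=len(X))
    stump = DecisionTreeClassifier(max_depth=1).fit(X[idx], y[idx])
    stumps.append(stump)
```

Because each stump sees a different random slice of the data, the stumps end up splitting on different features and thresholds, which is exactly the variety the paragraph above is describing.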
Well, what bagging does with random selections, boosting does with weights. Boosting doesn't take random portions of the dataset; it uses the whole dataset on every training iteration. Instead of randomizing the data, each iteration focuses on training a decision stump that resolves some of the sins committed by the previous decision stumps. It works like this:
- At first, each row of training data counts exactly the same: they all have the same weight. In your case, you have 1000 rows of training data, so each starts with a weight of 0.001, which means the weights sum to 1. (There's a short code sketch of this setup below.)
- Evaluate each feature on the entire...
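To make the starting point concrete, here's a minimal sketch of the weight setup and one weighted training pass. The NumPy/scikit-learn calls and the toy data are assumptions for illustration, not the book's own code.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # toy stand-in for the 1000 training rows
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Every row starts with the same weight: 1/1000 = 0.001, so the weights sum to 1.
weights = np.full(len(X), 1.0 / len(X))

# Train a stump on the *whole* dataset, letting the weights say how much
# each row matters, then measure its weighted error (the total weight of
# the rows it gets wrong).
stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=weights)
misclassified = stump.predict(X) != y
weighted_error = weights[misclassified].sum()
```

The weighted error is what later iterations care about: rows the current stumps keep getting wrong will have their weights increased, so the next stump is pushed to fix them.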