A random forest is a collection of random decision trees (similar to the ones described in the previous chapter), each built on a random subset of the data. A random forest classifies a feature into the class that the majority of its trees vote for. Because averaging many trees reduces variance (and, to a lesser degree, bias), a random forest tends to classify a feature more accurately than a single decision tree.
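The voting step described above can be sketched in a few lines of Python. This is a minimal illustration, not the chapter's implementation; the tree predictions are hypothetical placeholder values:

```python
from collections import Counter

def majority_vote(predictions):
    # Return the class label predicted by the most trees;
    # ties are broken by the order votes were first seen.
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical votes from three random decision trees for one data item:
tree_votes = ["swim", "no swim", "swim"]
print(majority_vote(tree_votes))  # -> swim
```

Each tree may individually misclassify the item, but as long as most trees are right, the forest's vote is right.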
In this chapter, you will learn:
- The tree bagging (bootstrap aggregation) technique used in random forest construction, which can also be applied to other algorithms and methods in data science to reduce variance and thus improve accuracy
- How to use the Swim preference example to construct a random forest and classify a data item with it
- How to implement an algorithm in Python that constructs a random forest
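At the heart of tree bagging is the bootstrap sample: a dataset of the same size as the original, drawn with replacement, so some items repeat and others are left out. Here is a minimal sketch, with an illustrative dataset made up for this example:

```python
import random

def bootstrap_sample(data, rng=None):
    # Draw len(data) items with replacement from data.
    # Each tree in the forest is trained on a different such sample.
    rng = rng or random.Random()
    return [rng.choice(data) for _ in data]

# Illustrative (temperature, decision) pairs, not from the chapter's data:
data = [("cold", "no swim"), ("warm", "swim"), ("hot", "swim")]
sample = bootstrap_sample(data, random.Random(0))
print(sample)
```

Training each tree on a different bootstrap sample decorrelates the trees, which is what makes averaging their votes effective.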