In this chapter, we learned that a random forest is a set of decision trees, where each tree is constructed from a sample drawn randomly, with replacement, from the initial data. This process is called bootstrap aggregating, or bagging. Its purpose is to reduce the variance of the classifications made by a random forest. The trees are further decorrelated during construction by considering only a random subset of the variables at each branch of a tree, which reduces the variance of the forest even more.
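The two sources of randomness described above can be sketched in a few lines of Python. The function names and the toy data here are illustrative, not part of any particular library:

```python
import random

rng = random.Random(42)

def bootstrap_sample(data):
    # Draw a sample of the same size as the data, with replacement,
    # so some rows appear more than once and others not at all.
    return [rng.choice(data) for _ in data]

def random_feature_subset(features, k):
    # At each branch of a tree, consider only k randomly chosen variables.
    return rng.sample(features, k)

data = [("sunny", 25, "play"), ("rainy", 12, "stay"), ("cloudy", 18, "play")]
print(bootstrap_sample(data))
print(random_feature_subset(["outlook", "temperature", "humidity"], 2))
```

Each tree in the forest would be trained on its own bootstrap sample, calling `random_feature_subset` whenever it chooses a splitting variable.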
We also learned that, once a random forest is constructed, its classification result is a majority vote among all the trees in the forest. The size of the majority also determines the level of confidence that the answer is correct.
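The majority vote and its confidence level can be sketched as follows; this is a minimal illustration, assuming each tree has already produced its answer:

```python
from collections import Counter

def forest_classify(tree_votes):
    # Tally the trees' answers and return the majority class,
    # together with the fraction of trees that voted for it.
    counts = Counter(tree_votes)
    winner, wins = counts.most_common(1)[0]
    confidence = wins / len(tree_votes)
    return winner, confidence

votes = ["play", "play", "stay", "play", "play"]
print(forest_classify(votes))  # ('play', 0.8)
```

Here four of the five trees vote for "play", so the forest answers "play" with 80% confidence; a narrower majority would mean a less confident answer.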
Since random forests consist of decision trees, they are a good choice for any problem where a decision tree is a good choice. Because a random forest reduces the variance present in a single decision tree classifier, it generally outperforms the plain decision tree algorithm.
In the next chapter, we will...