General considerations. To begin with, we choose the number of trees that we are going to construct for the decision forest. A random forest does not tend to overfit (unless the data is very noisy), so choosing many decision trees will not decrease the accuracy of the prediction. However, the more decision trees there are, the more computational power is required; moreover, increasing the number of trees dramatically does not improve the classification accuracy much. It is important to have sufficiently many decision trees so that, when samples are chosen randomly for the construction of each tree, most of the data ends up being used for the classification.
In practice, one can run the algorithm with a specific number of decision trees, then increase that number and compare the classification results of the smaller and the larger forest...
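The comparison described above can be sketched as follows. This is a minimal illustration, assuming scikit-learn's `RandomForestClassifier` and a synthetic dataset (neither is specified in the text): we train forests of increasing size on the same split and compare their test accuracy, expecting the gains to flatten out as the forest grows.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data stands in for whatever dataset is actually used.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a smaller and a bigger forest and compare their accuracy.
scores = {}
for n_trees in (10, 50, 200):
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(X_train, y_train)
    scores[n_trees] = clf.score(X_test, y_test)
    print(f"{n_trees} trees: test accuracy {scores[n_trees]:.3f}")
```

Typically the jump from 10 to 50 trees helps noticeably, while going from 50 to 200 changes the accuracy only slightly, at several times the training cost.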