The random forest algorithm is modern, versatile, robust, and accurate, and it deserves consideration for nearly any new classification task you encounter. It won't always be the best algorithm for a given problem domain, and it has issues with high-dimensional and very large datasets: give it more than 20-30 features, or more than, say, 100,000 training points, and it will likely struggle in terms of resources and training time.
However, the random forest is virtuous in many ways. It easily handles features of different types, meaning that some features can be numerical and others categorical; you can blend features such as number_of_logins: 24 with features such as account_type: guest. A random forest is very robust to noise and therefore performs well on real-world data. Random forests are designed to avoid overfitting, and therefore tend to generalize well to unseen data.
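A sketch of how mixed numerical and categorical features might be handled in practice with scikit-learn, using the number_of_logins and account_type features mentioned above (the toy data and labels here are invented for illustration; the forest itself requires numeric input, so the categorical column is one-hot encoded first):

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical toy dataset: column 0 is numerical (number_of_logins),
# column 1 is categorical (account_type).
X = np.array([
    [24, "guest"],
    [310, "admin"],
    [3, "guest"],
    [150, "member"],
    [220, "admin"],
    [7, "guest"],
], dtype=object)
y = np.array([0, 1, 0, 0, 1, 0])  # invented binary labels

# One-hot encode the categorical column; pass the numerical
# column through unchanged.
preprocess = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), [1])],
    remainder="passthrough",
)

clf = Pipeline([
    ("prep", preprocess),
    ("forest", RandomForestClassifier(n_estimators=50, random_state=0)),
])
clf.fit(X, y)

# Classify a new mixed-type example.
prediction = clf.predict(np.array([[5, "guest"]], dtype=object))
print(prediction)
```

Bundling the encoder and the forest in a Pipeline keeps the encoding learned from the training data consistent at prediction time, and handle_unknown="ignore" prevents a previously unseen account_type from raising an error.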