I have read that random forests build each decision tree on a bootstrap sample of the training data and consider only a random subset of the variables at each split. Although an individual decision tree tends to overfit, averaging the predictions of many such de-correlated trees reduces variance, so the trees' individual overfitting largely cancels out.
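For context, here is a minimal sketch of the kind of comparison I have in mind, using scikit-learn on a synthetic dataset (the dataset and hyperparameters are arbitrary choices for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification problem, purely illustrative
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A single unpruned tree: low bias, high variance
tree = DecisionTreeClassifier(random_state=0)

# A forest that averages 100 de-correlated trees
forest = RandomForestClassifier(n_estimators=100, random_state=0)

print("tree  :", cross_val_score(tree, X, y, cv=5).mean())
print("forest:", cross_val_score(forest, X, y, cv=5).mean())
```

On data like this, the forest's cross-validated accuracy is typically higher, which matches the variance-reduction argument above.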
Does this mean that a random forest will always perform better than a single decision tree on any dataset?
Or are there situations where a decision tree would outperform a random forest? If not, do we use decision trees only because they are easy to plot and to extract information about variable importance?