A very valid point, I must say. But honestly, things are a bit fuzzy in this case. There is no real definition or boundary for what counts as a ‘complex’ model. Model selection depends on many factors beyond the number of data points or features: the problem at hand, the relationships between the variables, and the type of variables (categorical/continuous).
I think @shuvayan has a very valid point. The problem at hand plays a big role. For instance, if you practice on Kaggle datasets, you mostly go for higher accuracy and use as powerful a model as you can.
But that’s not always the case. I used to work in pharmaceutical analytics some time back, and there we had an interesting problem. We had to build a predictive model that could be coded into an MS Excel application so that medical representatives could use it to make predictions on the fly. In this case, we were stuck with logistic regression and decision trees, because anything more complex is not very intuitive to code from scratch. Interpretability was also a major concern; the model had to make practical sense in such applications.
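To make the Excel point concrete, here is a minimal sketch (not from my original project, just an illustration on a synthetic dataset, assuming scikit-learn) of why logistic regression is so spreadsheet-friendly: once fitted, the whole model reduces to a single formula you can paste into a cell.

```python
# Minimal sketch: a fitted logistic regression is just one closed-form formula,
# so it can be re-typed directly as an Excel expression.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the real pharma data (assumption, for illustration only)
X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)

# Prediction is 1 / (1 + exp(-(b0 + b1*x1 + b2*x2 + b3*x3))),
# which maps one-to-one onto an Excel cell formula.
b0 = model.intercept_[0]
b1, b2, b3 = model.coef_[0]
print(f"=1/(1+EXP(-({b0:.4f} + {b1:.4f}*A2 + {b2:.4f}*B2 + {b3:.4f}*C2)))")
```

A decision tree can be ported the same way as a chain of nested IF() formulas, which is exactly why those two models were the practical choice there.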
To summarise, there is no definite answer here. You should also check out ensemble techniques:
These involve building a variety of models and then combining their results to get a better prediction.
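As a quick illustration, here is a minimal sketch, assuming scikit-learn and a made-up dataset (none of this is tied to any particular problem), of a simple voting ensemble that averages the predicted probabilities of a few different models:

```python
# Minimal sketch of an ensemble: combine several models and average their votes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data (assumption, for illustration only)
X, y = make_classification(n_samples=1000, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=5)),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    voting="soft",  # average predicted probabilities instead of hard votes
)

print("Ensemble CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```

Often the combined model beats any single member, at the cost of some interpretability, which is the trade-off discussed above.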
Hope this helps!