I think a model saturates when the training accuracy gets close to the CV score, so there is little room left for improvement. But my thinking got challenged recently.
I ran into an issue where I was using an RF model and got high training accuracy (~95%) but a low CV score (~80%). I figured the model was overfitting, so I started tuning parameters: min_samples_split, min_samples_leaf, max_depth, max_features. Training accuracy went down, but the CV score went down too. I was expecting CV to increase as I reduced overfitting, but that wasn't the case.
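For reference, here is a minimal sketch of the kind of tuning I mean, using scikit-learn's RandomForestClassifier on a synthetic dataset (the data and grid values are illustrative, not my actual setup):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-in for my dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# The four parameters I tried tuning to curb overfitting
param_grid = {
    "min_samples_split": [2, 10],
    "min_samples_leaf": [1, 5],
    "max_depth": [None, 5],
    "max_features": ["sqrt", 0.5],
}

search = GridSearchCV(
    RandomForestClassifier(n_estimators=50, random_state=0),
    param_grid, cv=5, n_jobs=-1,
)
search.fit(X, y)

best = search.best_estimator_
train_acc = best.score(X, y)                          # accuracy on training data
cv_score = cross_val_score(best, X, y, cv=5).mean()   # mean 5-fold CV accuracy
print(f"train={train_acc:.3f}  cv={cv_score:.3f}")
```

The pattern I keep seeing is exactly what this prints: tightening the grid lowers train_acc, but cv_score drops with it instead of improving.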
Does this mean the RF has saturated? It doesn't look that way to me, but almost nothing worked after hours of trying. The best I got was 91% accuracy with 78% CV, which still seems like a big gap. Is this normal, or am I missing a trick here?