Let’s say we have developed a model that predicts well on both training and test data. We have automated the model and delivered it to the client. Will it perform well in all situations? If not, how can we make it stable?
The model may need recalibration when new data becomes available, because future contingencies are not necessarily reflected in past data. If you don’t want to wait for new data, contaminate your present data with noise and see how your model performs.
Thanks for your reply. However, it would be helpful if you could give more insight with an example.
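To illustrate the noise-contamination check suggested above, here is a minimal sketch. The dataset, model, and noise levels are illustrative choices, not part of the original reply; substitute your own pipeline. The idea is to perturb the test features with Gaussian noise of increasing magnitude and watch how quickly accuracy degrades.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative dataset and model; replace with your own.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
baseline = model.score(X_test, y_test)
print(f"clean test accuracy: {baseline:.3f}")

rng = np.random.default_rng(0)
for scale in (0.05, 0.1, 0.2, 0.5):
    # Perturb each feature with Gaussian noise proportional to
    # that feature's standard deviation on the test set.
    noisy = X_test + rng.normal(0, scale * X_test.std(axis=0), X_test.shape)
    acc = model.score(noisy, y_test)
    print(f"noise scale={scale:.2f}  accuracy drop={baseline - acc:.3f}")
```

If accuracy drops sharply even at small noise levels, the model is fragile and is likely to degrade when live data drifts from the training distribution; that is a signal to simplify the model, add regularization, or plan for periodic recalibration.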