In classification, some applications such as healthcare involve misclassification costs. For example, when predicting whether a person will develop cancer, the cost of a false negative is far higher than the cost of a false positive.
Along the same lines, are there asymmetric costs when predicting a numeric output? For example, when forecasting future sales with regression, the cost of overshooting the actual value can be higher than the cost of a conservative prediction. If so, is this readily available as a built-in argument in R? That is, when we use a function like lm(target ~ ., …), can we directly optimize the model with respect to asymmetric costs?
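To make concrete what I mean by an asymmetric cost, here is a minimal sketch (written in Python for illustration, not R; `asym_loss` and the factor `k` are names I made up, and this is not a built-in feature of lm()). It weights over-predictions `k` times more heavily than under-predictions, so the fitted slope is pulled below the ordinary least-squares slope:

```python
def asym_loss(b, xs, ys, k=3.0):
    """Sum of squared residuals, weighting over-predictions (pred > actual) by k."""
    total = 0.0
    for x, y in zip(xs, ys):
        r = y - b * x            # residual; r < 0 means we over-predicted
        w = k if r < 0 else 1.0  # asymmetric penalty on over-prediction
        total += w * r * r
    return total

# Tiny synthetic data, roughly y = 2x, fitted through the origin
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

# Crude grid search over slopes in [1.0, 3.0) for the asymmetric-loss minimizer
best_b = min((b / 1000 for b in range(1000, 3000)),
             key=lambda b: asym_loss(b, xs, ys))

# With k > 1 the fitted slope sits below the symmetric least-squares slope,
# i.e. the model deliberately predicts conservatively.
print(best_b)
```

With k = 3 the minimizer lands a little under the symmetric least-squares slope of about 2.0, which is exactly the conservative bias I am asking whether R can produce directly.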