Why is it necessary to use a cross-validation approach to find the tuning parameter?

lasso
crossvalidation

#1

I am currently studying lasso regression.

lasso regression - In statistics and machine learning, lasso (least absolute shrinkage and selection operator) (also Lasso or LASSO) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the statistical model it produces.
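For reference, the lasso estimate minimizes the residual sum of squares plus an $\ell_1$ penalty on the coefficients, with $\lambda \ge 0$ controlling the strength of the penalty:

$$\hat{\beta}^{\text{lasso}} = \arg\min_{\beta}\left\{ \sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\right)^2 + \lambda\sum_{j=1}^{p}\left|\beta_j\right| \right\}$$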

Here lambda is the tuning parameter. I want to know why it is necessary to use only the cross-validation approach to find lambda, and why we cannot use other approaches such as AIC or BIC.
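To make the cross-validation approach concrete, here is a minimal sketch using scikit-learn's LassoCV, which fits the lasso over a grid of lambda values (called alpha in scikit-learn) and keeps the one with the lowest cross-validated error; the synthetic dataset is just an illustrative assumption.

```python
# Minimal sketch: choosing the lasso tuning parameter by cross-validation.
# The synthetic dataset from make_regression is an illustrative assumption.
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=200, n_features=30, noise=5.0, random_state=0)

# LassoCV fits the lasso along a grid of alpha (lambda) values and selects
# the value with the lowest mean cross-validated prediction error.
model = LassoCV(cv=5, random_state=0).fit(X, y)

print("selected lambda (alpha):", model.alpha_)
print("number of nonzero coefficients:", (model.coef_ != 0).sum())
```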


#2

I am assuming the question is about Lasso regression.

AIC assesses the quality of a model based on its maximized likelihood and the number of parameters it contains. AIC rewards goodness of fit, but it also includes a penalty that is an increasing function of the number of parameters.
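For a model with $k$ estimated parameters and maximized likelihood $\hat{L}$, the criterion is

$$\text{AIC} = 2k - 2\ln(\hat{L}),$$

so lower values are better and each extra parameter adds to the penalty.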

In the case of Lasso regression, we already have a penalty term that penalizes the model more as more coefficients are kept (as can be seen from the lasso equation in the question).

So it may not be a good option to use AIC when a penalty function is already built into the Lasso. I think this is why cross-validation is preferred over AIC.
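To illustrate what cross-validation is actually optimizing, here is a rough sketch (assuming scikit-learn and a synthetic dataset, both illustrative) that scores a grid of lambda values by their held-out prediction error, which is the quantity we ultimately care about when tuning the lasso, rather than an in-sample criterion.

```python
# Rough sketch: scoring a grid of lambda values by K-fold held-out error.
# numpy, scikit-learn, and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=200, n_features=30, noise=5.0, random_state=0)
lambdas = np.logspace(-3, 1, 20)          # candidate tuning parameters
kf = KFold(n_splits=5, shuffle=True, random_state=0)

cv_errors = []
for lam in lambdas:
    fold_errors = []
    for train_idx, test_idx in kf.split(X):
        fit = Lasso(alpha=lam).fit(X[train_idx], y[train_idx])
        resid = y[test_idx] - fit.predict(X[test_idx])
        fold_errors.append(np.mean(resid ** 2))  # held-out mean squared error
    cv_errors.append(np.mean(fold_errors))

best = lambdas[int(np.argmin(cv_errors))]
print("lambda with smallest estimated prediction error:", best)
```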

Thanks,
SRK