Cross-validation: leave-one-out vs. K-fold



I am new to machine learning and am trying to clear up my concepts:

  1. Leave-one-out cross-validation: we hold out one point as the validation set and train on the remaining n-1 points. This is repeated for all n points, and at the end the errors are averaged. However, how do I find my best model here? I do not seem to understand.

  2. K-fold cross-validation: we split the data into K folds, train on K-1 folds and validate on the remaining one, repeating this so that each of the K folds (e.g. K = 6) serves as the validation set exactly once. Is my understanding correct here?
    Do we also need to change a (hyper)parameter across the folds and average the results at the end of every iteration? In that case, how is this different from nested cross-validation?
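To make sure I am describing the procedure correctly, here is a minimal sketch of what I think K-fold cross-validation does, in pure Python (the toy model that just predicts the mean of its training targets is only a placeholder, not any particular algorithm). Leave-one-out is then just the special case K = n, one point per fold:

```python
def k_fold_cv(X, y, K, fit, loss):
    """Average validation loss over K folds.

    fit(X_train, y_train) -> a predict function
    loss(y_true, y_pred)  -> a float
    """
    n = len(X)
    # Spread the n points over K folds as evenly as possible.
    fold_sizes = [n // K + (1 if i < n % K else 0) for i in range(K)]
    errors, start = [], 0
    for size in fold_sizes:
        stop = start + size
        X_val, y_val = X[start:stop], y[start:stop]      # held-out fold
        X_tr = X[:start] + X[stop:]                      # remaining K-1 folds
        y_tr = y[:start] + y[stop:]
        predict = fit(X_tr, y_tr)
        fold_err = sum(loss(yv, predict(xv))
                       for xv, yv in zip(X_val, y_val)) / size
        errors.append(fold_err)
        start = stop
    return sum(errors) / K   # the cross-validation error estimate

# Toy stand-in for a real learner: predicts the mean of its training targets.
def fit_mean(X_tr, y_tr):
    mean = sum(y_tr) / len(y_tr)
    return lambda x: mean

X = [[1], [2], [3], [4], [5], [6]]
y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

squared = lambda a, b: (a - b) ** 2
cv_err  = k_fold_cv(X, y, K=3,      fit=fit_mean, loss=squared)  # 3-fold
loo_err = k_fold_cv(X, y, K=len(X), fit=fit_mean, loss=squared)  # leave-one-out
```

The single number that comes out is an estimate of how well this model class generalizes, not a trained model itself, which is part of what confuses me.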

Lastly, what is the difference between grid search and nested cross-validation?
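My current (possibly wrong) picture of the distinction, as a pure-Python sketch: plain grid search uses one level of cross-validation to pick a hyperparameter, while nested CV wraps that entire selection procedure inside an outer CV loop so the outer average estimates its generalization error. The "model" here is a made-up shrunk-mean predictor with one hyperparameter `alpha`, purely for illustration:

```python
def folds(n, K):
    """Yield (val_indices, train_indices) for K contiguous folds."""
    idx = list(range(n))
    size = n // K
    for i in range(K):
        val = idx[i * size:(i + 1) * size] if i < K - 1 else idx[i * size:]
        train = [j for j in idx if j not in val]
        yield val, train

def cv_error(y, alpha, K):
    """K-fold CV error of a toy shrunk-mean predictor with shrinkage alpha."""
    errs = []
    for val, train in folds(len(y), K):
        pred = sum(y[j] for j in train) / (len(train) + alpha)  # shrunk mean
        errs.append(sum((y[j] - pred) ** 2 for j in val) / len(val))
    return sum(errs) / K

def grid_search(y, grid, K=3):
    """Plain grid search: pick the alpha with the lowest CV error."""
    return min(grid, key=lambda a: cv_error(y, a, K))

def nested_cv(y, grid, outer_K=3, inner_K=3):
    """Nested CV: the whole grid search runs inside each outer fold, and
    the outer average estimates how well the selection procedure does on
    data it never saw during selection."""
    outer_errs = []
    for val, train in folds(len(y), outer_K):
        y_tr = [y[j] for j in train]
        best = grid_search(y_tr, grid, inner_K)   # inner loop: pick alpha
        pred = sum(y_tr) / (len(y_tr) + best)     # refit on the outer train set
        outer_errs.append(sum((y[j] - pred) ** 2 for j in val) / len(val))
    return sum(outer_errs) / outer_K

y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
grid = [0.0, 0.5, 1.0]
best_alpha = grid_search(y, grid)   # answers "which hyperparameter?"
gen_error  = nested_cv(y, grid)     # answers "how good is this whole procedure?"
```

So grid search answers "which setting?" and nested CV answers "how good is the procedure that makes that choice?", if I have understood correctly.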