In some competitions, you are given only the test and training datasets. These competitions require you to submit your predictions for the test dataset, after which your accuracy is calculated and returned to you. There is no way to check your accuracy on the test dataset before submitting your result.
In other competitions, you are either provided a validation dataset or expected to carve one out of the training dataset. Its purpose is to let you check the accuracy of your model before predicting for the test dataset: you already have the true outputs (the values you want to predict) for the validation set, so you only need to compare them with the outputs your model produces for it, as sketched below.
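Here is a minimal sketch of that workflow using scikit-learn. The feature matrix `X` and labels `y` are synthetic stand-ins for what you would load from a competition's training file, and the model choice is purely illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Stand-in data; in a real competition you would load the training file here.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hold out 20% of the training data as a validation set.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)

# Compare the known validation labels with the model's predictions.
val_accuracy = accuracy_score(y_val, model.predict(X_val))
print(f"Validation accuracy: {val_accuracy:.3f}")
```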
So you use validation data to estimate how well your model has been trained (which depends on the size of your data, the value you want to predict, the input features, etc.) and to estimate model properties (mean error for numeric predictors, classification error for classifiers, recall and precision for IR models, etc.).
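For instance, a rough sketch of computing those properties on a validation set; the arrays here are hand-made purely for illustration:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, mean_absolute_error)

# Classifier: true vs. predicted labels on a validation set.
y_val = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print("Classification error:", 1 - accuracy_score(y_val, y_pred))
print("Precision:", precision_score(y_val, y_pred))
print("Recall:", recall_score(y_val, y_pred))

# Numeric predictor: a mean error between true and predicted values.
y_val_num = [3.0, -0.5, 2.0, 7.0]
y_pred_num = [2.5, 0.0, 2.0, 8.0]
print("Mean absolute error:", mean_absolute_error(y_val_num, y_pred_num))
```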
This intermediate step on the validation set helps you avoid problems such as overfitting to the training data.
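To see why, here is a sketch in which an unconstrained decision tree is fit to pure noise; the data is made up, but the train/validation gap it prints is exactly the overfitting signal the validation set exists to catch:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 2, size=500)  # pure noise: nothing real to learn

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=42
)
tree = DecisionTreeClassifier().fit(X_train, y_train)

print("Train accuracy:     ", accuracy_score(y_train, tree.predict(X_train)))
print("Validation accuracy:", accuracy_score(y_val, tree.predict(X_val)))
# Train accuracy is ~1.0 while validation accuracy is ~0.5: the gap
# signals overfitting before you ever touch the test set.
```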
After checking your accuracy on the validation dataset, you can be reasonably confident of getting similar accuracy on the test dataset (unseen data) as well, provided the validation set is representative of it.
Hope this helps!