I would recommend revisiting the whole model-building life cycle rather than focusing on XGBoost alone to improve the accuracy of your model. I always follow the approach below, and it has worked very well for me:
Problem Identification and Hypothesis Generation
Identify the problem first (if you have domain experience, great)
Generate hypotheses about which features could impact the target variable. Caution: you should perform this step without looking at the data.
Data Exploration and Pre-processing
Data exploration (uncovering hidden trends), then missing-value and outlier treatment
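A minimal sketch of missing-value and outlier treatment with pandas. The columns (age, income) and the percentile capping rule are just illustrative assumptions, not a prescription:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [25, 32, np.nan, 41, 29, 120],            # 120 looks like a data-entry error
    "income": [40e3, 52e3, 48e3, np.nan, 45e3, 50e3],
})

# Missing treatment: impute numeric columns with the median (robust to outliers)
df["age"] = df["age"].fillna(df["age"].median())
df["income"] = df["income"].fillna(df["income"].median())

# Outlier treatment: cap values outside the 1st-99th percentile range
low, high = df["age"].quantile([0.01, 0.99])
df["age"] = df["age"].clip(low, high)
```

Median imputation and percentile capping are only two of many options; what is right depends on why the values are missing or extreme.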
Feature engineering (create new variables from existing ones; the hypotheses from step 2 will suggest candidates)
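For example, a hypothesis like "repayment burden impacts default" suggests a ratio feature. The columns here (loan_amount, income, signup_date) are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "loan_amount": [10000, 25000, 5000],
    "income": [40000, 50000, 20000],
    "signup_date": pd.to_datetime(["2020-01-15", "2020-06-01", "2020-03-10"]),
})

# Ratio feature derived from a domain hypothesis
df["loan_to_income"] = df["loan_amount"] / df["income"]

# Date decomposition: month and day-of-week often carry seasonal signal
df["signup_month"] = df["signup_date"].dt.month
df["signup_dow"] = df["signup_date"].dt.dayofweek
```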
Select the right validation set; it helps you avoid over-fitting
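One common way to build a sensible validation set with scikit-learn is a stratified split, which keeps the class ratio the same in train and validation so the validation score is a more honest estimate. The data here is synthetic:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = np.array([0] * 80 + [1] * 20)          # imbalanced target, 20% positives

# stratify=y preserves the 80/20 class ratio in both splits
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)
```

For time-ordered data, a chronological split (train on the past, validate on the future) is usually the safer choice.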
Select the right algorithm (sometimes logistic regression delivers better results than more complex models)
Do the parameter tuning using GridSearch, although with experience you can do this tuning manually.
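A minimal GridSearchCV sketch; the model and parameter grid are illustrative, not a recommendation for any particular data set:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)

# Exhaustively try every combination in param_grid with 3-fold CV
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"max_depth": [2, 3], "learning_rate": [0.05, 0.1]},
    cv=3,
    scoring="accuracy",
)
grid.fit(X, y)
# grid.best_params_ holds the winning combination,
# grid.best_score_ its mean cross-validation accuracy
```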
Try multiple algorithms and compare their cross-validation and leaderboard scores
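The comparison can be as simple as looping over candidate models with `cross_val_score`; the two models and the synthetic data here are just placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=1)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=100, random_state=1),
}
# Mean 5-fold CV accuracy per model; prefer the one whose CV score
# also tracks the leaderboard, not just the highest number
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```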
Ensemble the outputs of multiple algorithms
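One simple form of ensembling is soft voting, which averages the predicted probabilities of several models; the two models and data below are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

# voting="soft" averages predict_proba across the base models
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=2)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
```

Ensembles help most when the base models make different kinds of errors, so combine diverse algorithms rather than near-duplicates.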
Finally, make predictions on the test data set. I would also suggest understanding the pros and cons of each algorithm before using it; that will help you see which algorithm works for which type of problem (data set).
Hope this helps!