How do I compare AdaBoost and Gradient Boosting across different data sets?

boosting
machine_learning

#1

Hi everyone,
I have recently started learning about bagging and boosting, and I'm having a basic difficulty understanding when each algorithm is the right choice.
As I understand it, AdaBoost combines weak learners sequentially, reweighting the misclassified observations at each round so that later learners focus on them, while gradient boosting reduces a loss function by fitting each new learner to the negative gradient of that loss (the pseudo-residuals of the current ensemble). How do I interpret this difference practically? How can I tell whether to use gradient boosting or AdaBoost on a given classification problem? Or do I just have to go by trial and error, e.g. cross-validating both as in the sketch below?
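
For context, this is the kind of trial-and-error comparison I have in mind. It's a minimal sketch assuming scikit-learn; the synthetic dataset, `n_estimators`, and the other parameters are just placeholders I picked for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder synthetic data; in practice this would be my real data set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

models = {
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=42),
    "GradientBoosting": GradientBoostingClassifier(n_estimators=100, random_state=42),
}

# Compare both boosters with 5-fold cross-validation on the same data.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Is picking whichever scores better under cross-validation like this the usual practice, or are there properties of the data that should point me to one algorithm up front?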

Thanks in advance!