What are the main differences between Adaboost and Gradient Boosting
In simple terms: both are ensembles of weak learners, but they differ in how each new learner is trained. AdaBoost re-weights the training samples so that the next weak learner focuses on the examples the previous learners got wrong. Gradient Boosting instead fits each new weak learner (typically a shallow decision tree) to the residual errors of the current ensemble.
So in GB, while building the next weak learner, we fit it to the residual errors of the previous ensemble and then add its predictions to the running total… Am I correct?
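Almost: the next weak learner is *fit to* the residuals, and its predictions are then *added* to the ensemble, usually scaled by a learning rate. Here is a minimal sketch of that loop for squared-error loss; the toy data, tree depth, and learning rate are made up for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy regression data (illustrative only)
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X.ravel()) + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
n_rounds = 50

# Start from a constant prediction (the mean minimizes squared error)
pred = np.full_like(y, y.mean())
trees = []
for _ in range(n_rounds):
    residual = y - pred                 # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2)
    tree.fit(X, residual)               # next weak learner fits the residuals
    pred += learning_rate * tree.predict(X)  # add its scaled prediction
    trees.append(tree)

print(np.mean((y - pred) ** 2))         # training MSE after boosting
```

Each round the residuals shrink, so each new tree corrects what the ensemble so far still gets wrong; with AdaBoost you would instead adjust per-sample weights and keep fitting to the original labels.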
Along with what @mahesh51 has mentioned, you can also refer to the article below, which explains both of these algorithms with examples: