Determining the weights assigned to each model in Gradient Boosting

machine_learning
gradient_boosting

#1

Hi!
I have been trying to understand the gradient boosting algorithm, in which a new model f2(x) is fit to the errors (residuals) of the previous model f1(x), then another model f3(x) is fit to the errors of f2(x), and so on.
Then our final model f(x) is represented as a weighted linear combination of all the models:
f(x) = a1*f1(x) + a2*f2(x) + a3*f3(x) + ..., with a1, a2, a3 representing the weights of the respective models.
So how are these weights a1, a2, a3, ... assigned to the corresponding models f1(x), f2(x), f3(x), ... ?
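
To make the question concrete, here is a rough Python sketch of how I currently picture the procedure (using scikit-learn regression trees as the base learners; the `weights` list is just a placeholder, since how those values should actually be chosen is exactly what I am asking about):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data (just for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

n_models = 3
weights = [1.0, 1.0, 1.0]  # a1, a2, a3 -- how should these be set?
models = []

# Fit each new model on the residuals left by the previous (weighted) models
residual = y.copy()
for m in range(n_models):
    f_m = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    models.append(f_m)
    residual = residual - weights[m] * f_m.predict(X)

# Final model: f(x) = a1*f1(x) + a2*f2(x) + a3*f3(x)
def f(X_new):
    return sum(a * m.predict(X_new) for a, m in zip(weights, models))
```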