How to interpret the error reduction across boosting iterations in ada

I am trying to implement AdaBoost (the `ada` package in R) for one of my problems, and here is my code to inspect how the error changes with the number of boosting iterations:

    # fit the model on the training data
    titanic.ada <- ada(survived ~ pclass + sex + age + sibsp,
                       data = titanic.train, verbose = TRUE)

    # ...and add the test data for evaluation
    titanic.ada <- addtest(titanic.ada, test.x = titanic.pred[, -1],
                           test.y = titanic.pred[, 1])

    # plot the error for each boosting iteration
    plot(titanic.ada, test = TRUE)

Does this graph mean that once the number of iterations rises above 10, the error rate actually increases?
If I am not wrong, AdaBoost works such that correctly classified records do not get their weights increased. If that is true, only the misclassified records receive more attention in the next iteration, while the correctly classified ones are left untouched by the algorithm.
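For reference, here is a toy sketch of the weight update as I understand it. This is my own illustration of the generic AdaBoost.M1 rule, not code from the `ada` package, and the function name and numbers are made up:

    ## Toy illustration of the AdaBoost.M1 weight update (my own sketch,
    ## not from the ada package): misclassified points are up-weighted and
    ## all weights renormalised, so correct points lose weight relatively.
    update_weights <- function(w, correct, err) {
      alpha <- 0.5 * log((1 - err) / err)          # weak learner's vote
      w[!correct] <- w[!correct] * exp(2 * alpha)  # boost the mistakes
      w / sum(w)                                   # renormalise
    }

    w  <- rep(0.25, 4)                # four samples, uniform weights
    ok <- c(TRUE, TRUE, TRUE, FALSE)  # last sample was misclassified
    update_weights(w, ok, err = 0.25)
    ## the misclassified sample now carries half of the total weight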
In such a scenario, how is it possible that the error rate increased?
Can someone please explain what I am missing?