Text summarization - Encoder Decoder Using Attention Model

In an Encoder-Decoder model with attention, the attention mechanism weighs the importance of each input word for the particular word being generated in the output. The decoder therefore processes the output one time-step at a time, with the output of the first time-step serving as the input to the second, and so on. Programmatically, should this be done in a loop over each output time-step?
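Yes, at inference time decoding is typically an explicit loop over output time-steps. Below is a minimal NumPy sketch of that loop, assuming toy dimensions, random parameters, simple dot-product attention, and a made-up one-line "decoder cell" (a real model would use an LSTM/GRU cell and trained weights). The point is only the control flow: each iteration computes attention over the encoder states, produces a token, and feeds that token into the next iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes (illustrative only)
src_len, hidden = 5, 8
vocab_size, max_out_len = 10, 4

encoder_states = rng.normal(size=(src_len, hidden))  # one vector per input word

# Hypothetical (untrained) decoder parameters
W_out = rng.normal(size=(2 * hidden, vocab_size))    # maps [state; context] -> vocab logits
embed = rng.normal(size=(vocab_size, hidden))        # token embedding table

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Greedy decoding: one loop iteration per output time-step,
# feeding step t's predicted token back in as step t+1's input.
state = np.zeros(hidden)
token = 0  # assumed <start> token id
outputs = []
for _ in range(max_out_len):
    # Attention: score each encoder state against the current decoder state
    scores = encoder_states @ state      # dot-product attention (one common choice)
    weights = softmax(scores)            # importance of each input word for this step
    context = weights @ encoder_states   # weighted sum of encoder states

    # Toy "decoder cell": combine previous token, state, and attention context
    state = np.tanh(state + embed[token] + context)
    logits = np.concatenate([state, context]) @ W_out
    token = int(np.argmax(logits))       # greedy pick; fed into the next step
    outputs.append(token)
```

Note that this loop is unavoidable at inference because each step depends on the previous step's output. During training, however, teacher forcing replaces the fed-back predictions with the ground-truth previous tokens, which is why training code can often process all time-steps without this sequential dependency on predictions.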