I have written a machine learning program using TensorFlow. I use mean squared error as the error function and train the network with GradientDescentOptimizer (learning rate 0.05) to minimize the error, using the following section of code:
error_function = 0.5 * tf.reduce_sum(tf.subtract(desired_outputs,logits) * tf.subtract(desired_outputs,logits))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(error_function)
for i in range(20000):
    _, loss = sess.run([train_step, error_function],
                       feed_dict={inputs: np.array(training_inputs),
                                  desired_outputs: np.array(training_outputs)})
For a small data set (1000x16 input size) it works properly, but for a somewhat larger one (10000x16 input size) it does not.
What I have noticed is that the loss value it displays decreases on most steps, but sometimes it increases. Is that possible?
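One thing worth noting about the setup above: because the error function uses `tf.reduce_sum`, the loss (and its gradients) grows with the number of training examples, so the same learning rate of 0.05 has a much larger effective step size on the 10000x16 set than on the 1000x16 set, which can make the loss oscillate or increase. A minimal NumPy sketch (the constant residual value here is hypothetical, just to illustrate the scaling):

```python
import numpy as np

def summed_loss(desired, predicted):
    # Same formula as the error_function above: 0.5 * sum of squared differences
    diff = desired - predicted
    return 0.5 * np.sum(diff * diff)

# Assume every prediction is off by the same hypothetical per-sample residual
residual = 0.1
small = np.full((1000, 16), residual)    # residuals for the 1000x16 data set
large = np.full((10000, 16), residual)   # residuals for the 10000x16 data set

loss_small = summed_loss(small, np.zeros_like(small))
loss_large = summed_loss(large, np.zeros_like(large))

print(loss_small)                 # 80.0
print(loss_large)                 # 800.0
print(loss_large / loss_small)    # 10.0: same per-sample error, 10x the loss
```

Replacing `tf.reduce_sum` with `tf.reduce_mean` makes the loss (and hence the gradient magnitude) roughly independent of the data set size, so one learning rate behaves similarly across both sets.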