Problem in TensorFlow: Cannot feed value of shape (140, 29, 29) for Tensor u'Placeholder:0', which has shape '(?, 841)'

tensorflow

#1

Note: Originally asked as a comment on the article “An Introduction to Implementing Neural Networks using TensorFlow”

Dear Sir,
My problem still exists.
This is the code that I am using

This is the dataset that I am using
https://drive.google.com/file/d/0B9brAmtid-xTbEV3SDdudEJCZzA/view?usp=sharing

When I try to run the code, this error message pops up:
ValueError: Cannot feed value of shape (140, 29, 29) for Tensor u'Placeholder:0', which has shape '(?, 841)'

at line 87:
_, c = sess.run([optimizer, cost], feed_dict={x: train_x, y: train_y})

I still cannot solve the problem… Any help would be great…
Thank you very much.


#2

There are two issues in your code:

  1. You have defined all the parameters required for batch gradient descent, but you haven't done any explicit batch splitting. In the article, there's a batch_creator function which does this for you (see the sketch after this list).

  2. As the ValueError already suggests, the input you are feeding to the placeholder and the shape of that placeholder don't match; that is why the error occurs. In the article, batch_x = eval(dataset_name + '_x')[[batch_mask]].reshape(-1, input_num_units) reshapes the input so that it fits the placeholder. Note that 29 × 29 = 841, so flattening each 29×29 image yields exactly the 841 values the placeholder expects.
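A minimal sketch combining both fixes, assuming your training images sit in a NumPy array train_x of shape (num_samples, 29, 29); it mirrors the article's batch_creator in spirit, though this simplified version skips the one-hot encoding the article also performs:

import numpy as np

input_num_units = 29 * 29  # = 841, the width the placeholder expects

def batch_creator(batch_size, dataset_x, dataset_y):
    # Draw a random batch and flatten each 29x29 image into 841 values.
    batch_mask = np.random.choice(len(dataset_x), batch_size, replace=False)
    batch_x = dataset_x[batch_mask].reshape(-1, input_num_units)
    batch_y = dataset_y[batch_mask]
    return batch_x, batch_y

# Inside the training loop, feed the flattened batch instead of the raw data:
# batch_x, batch_y = batch_creator(batch_size, train_x, train_y)
# _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})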


#3

Dear sir,
Thank you for your help.
I have tried to improve my code based on your advice.
After the improvement, two problems remain. The first one seems to be a NumPy bug, but I don't know how to fix it:

  File "prediction.py", line 122, in <module>
print "Validation Accuracy:", accuracy.eval({x: val_x.reshape(-1, 841), y: dense_to_one_hot(val_y)})
 File "prediction.py", line 41, in dense_to_one_hot
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
IndexError: unsupported iterator index

The second problem is about the cost: I tried to train the NN for 5 epochs, but it reports a training cost of 0 for every epoch.
I think this is caused by me feeding the wrong data for training (i.e. I am feeding something that is all zeros, rather than my dataset, as sketched below), but I cannot spot what I am doing wrong.
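To make the suspicion concrete, here is a sketch of the two cases (the variable names are made up for illustration, and the sess.run calls are commented out because they depend on the rest of my script):

import numpy as np

# What I suspect is happening: an all-zero array (instead of the real data)
# ends up in the feed_dict, so the network "trains" on zeros and the
# reported cost stays at 0.
wrong_input = np.zeros((140, 841))
# _, c = sess.run([optimizer, cost], feed_dict={x: wrong_input, y: train_y})

# What should happen: feed the actual training images, flattened to 841 values.
# right_input = train_x.reshape(-1, 841)
# _, c = sess.run([optimizer, cost], feed_dict={x: right_input, y: train_y})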

Once again, thank you for your help… I am really new to programming, but I need to get this done. Any help would be great. Thank you very much.

Edit: I forgot to mention that the improved code can still be accessed via the prediction.py link:
https://drive.google.com/open?id=1jga0cxaCK48xkt8raL75CUw3vUqJ2P843EoBmJDc4tc


#4
  1. Hint: there's a difference between numpy.flat and numpy.flatten. Check it out (see the illustration after this list).

  2. Why are you feeding the wrong data for training? That, in other words, means you are not letting it train! :smile:
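A quick, self-contained illustration of the hint in point 1 (my own example, not from the article):

import numpy as np

a = np.zeros((2, 3))

# ndarray.flat is a 1-D iterator over the original array: assigning through
# it with integer indices modifies a in place.
a.flat[[0, 4]] = 1
print(a)
# [[1. 0. 0.]
#  [0. 1. 0.]]

# ndarray.flatten() returns a flattened copy: assigning into that copy
# leaves the original array untouched.
b = a.flatten()
b[[1, 2]] = 1
print(a)  # still the same as above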


#5

I have just spotted that dense_to_one_hot takes a num_classes argument:

def dense_to_one_hot(labels_dense, num_classes=10):

This indicates that it is meant for a classification problem, whereas in my case I would like to do regression.

This makes me believe that this is why the second problem exists: the logic is wrong from the dense_to_one_hot step onwards, which leads to the wrong evaluation.
If I want to change the code from solving a classification problem to solving a regression problem, what should I do?

I tried to find some examples, but most of them are classification problems based on MNIST…

EDIT 1: Another thing that maybe I should change is the cost function. From what I have read, softmax_cross_entropy_with_logits only works for classification-type problems.

cost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits=out_layer, labels=train_y))

But I have no idea which function I should change to. Can you suggest a function? And why?
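For reference, the closest thing I have found in regression examples is mean squared error; a sketch of what the replacement might look like, assuming out_layer produces one real-valued output per target in y (this is only my guess, not something I have verified):

# Mean squared error: the average squared difference between the network's
# raw outputs and the real-valued targets -- no softmax, no one-hot labels.
cost = tf.reduce_mean(tf.square(out_layer - y))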

Any help would be good… I am really new to machine learning. Thank you.