Problem in TensorFlow: Cannot feed value of shape (140, 29, 29) for Tensor u'Placeholder:0', which has shape '(?, 841)'

Note: Originally asked as a comment on article “An Introduction to Implementing Neural Networks using TensorFlow”

Dear Sir,
My problem still exists.
This is the code that I am using:

This is the dataset that I am using

When I try to run the code, this error message pops up:
ValueError: Cannot feed value of shape (140, 29, 29) for Tensor u'Placeholder:0', which has shape '(?, 841)'

at line 87
_, c =[optimizer, cost], feed_dict={x: train_x, y: train_y})

I still cannot solve the problem. Any help would be great.
Thank You very much.

There are two issues in your code:

  1. You have defined all the parameters required for batch gradient descent, but you haven't done an explicit batch split. In the article, there's a batch_creator function which does this for you.

  2. As the ValueError already suggests, the input you are feeding to the placeholder and the shape of that placeholder don't match. That is why this error occurs. In the article, batch_x = eval(dataset_name + '_x')[[batch_mask]].reshape(-1, input_num_units) reshapes the input so that it fits the placeholder.
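For illustration, here is a minimal numpy sketch of that reshape, using a dummy array with your dimensions (29 × 29 = 841):

```python
import numpy as np

# Hypothetical stand-in for your training images: 140 samples of 29x29 pixels.
train_x = np.zeros((140, 29, 29))

# Flatten each 29x29 image into one row of 29*29 = 841 values, matching the
# placeholder shape (?, 841). The -1 lets numpy infer the number of rows.
flat_x = train_x.reshape(-1, 841)

print(flat_x.shape)  # (140, 841)
```

Feeding flat_x instead of the raw 3-D array is what makes the shapes agree with the placeholder.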

Dear sir,
Thank you for your help.
I have tried to improve my code based on your advice.
After the improvement, two problems remain. The first seems to be a numpy bug, but I don't know how to fix it:

  File "", line 122, in <module>
print "Validation Accuracy:", accuracy.eval({x: val_x.reshape(-1, 841), y: dense_to_one_hot(val_y)})
 File "", line 41, in dense_to_one_hot
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
IndexError: unsupported iterator index

The second problem is about the cost: I tried to train the NN for 5 epochs, but it reports a training cost of 0 for every epoch.
I think this is caused by my calling the wrong dataset for training (i.e. I am feeding the placeholder, which is all zeros, instead of my dataset), but I cannot spot what I am doing wrong.

Once again, thank you for your help. I am really new to programming, but I need to get this done. Any help would be great. Thank you very much.

Edit: I forgot to mention, the improved code can still be accessed via the link from the

  1. Hint: There’s a difference between numpy.flat and numpy.flatten. Check it out.

  2. Why are you feeding the wrong data for training? In other words, you are not letting it train! :smile:
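To expand on the first hint, here is a minimal sketch of a working dense_to_one_hot, assuming integer class labels. The key point is that ndarray.flat supports assignment back into the original array, while flatten() only returns a copy:

```python
import numpy as np

def dense_to_one_hot(labels_dense, num_classes=10):
    """Convert integer labels, e.g. [0, 2, 1], to one-hot rows."""
    num_labels = labels_dense.shape[0]
    # Offset of the start of each row in the flattened one-hot matrix.
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    # .flat accepts an array of flat indices and writes into labels_one_hot
    # in place; .flatten() would return a copy, so writing into it would
    # leave the original array untouched.
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

print(dense_to_one_hot(np.array([0, 2, 1]), num_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```

Note that the flat indexing only works when labels_dense contains integers, which is one reason the distinction matters here.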

I have just spotted that dense_to_one_hot takes a num_classes parameter:

def dense_to_one_hot(labels_dense, num_classes=10):

This indicates that it is meant for a classification problem, whereas in my case I would like to do regression.

This makes me believe that this is why the second problem exists: the logic is wrong starting from the dense_to_one_hot part, which causes the wrong evaluation.
If I want to change the code from solving a classification problem to a regression problem, what should I do?

I tried to find some examples, but most of them are classification problems on MNIST…

EDIT 1: Another thing that I should perhaps change is the cost function. From what I have read, softmax_cross_entropy_with_logits only works for classification-type questions.

cost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits=out_layer, labels=train_y))

But I have no idea what function I should change it to. Can you suggest one? And why?
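From the examples I could find so far, mean squared error seems to be the usual cost for regression. This numpy sketch is just my understanding of the arithmetic that a line like cost = tf.reduce_mean(tf.square(out_layer - y)) would compute (out_layer and y as in my code, with a single output unit); I am not sure it is the right change:

```python
import numpy as np

# Toy regression outputs and targets (made-up values for illustration).
predictions = np.array([2.5, 0.0, 2.0])   # what out_layer would produce
targets = np.array([3.0, -0.5, 2.0])      # what y would hold

# Mean squared error: average of the squared differences. This is the
# same arithmetic as tf.reduce_mean(tf.square(out_layer - y)).
mse = np.mean(np.square(predictions - targets))
print(mse)  # 0.16666...
```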

Any help would be good, as I am really new to machine learning. Thank you.


I am also trying to implement this neural network, and I have a similar problem to @jeffrey9909's.
I am getting an error:
InvalidArgumentError (see above for traceback): logits and labels must be broadcastable: logits_size=[512,10] labels_size=[128,10]

From what I have read in the discussion of this tutorial, this is caused by batch_x and batch_y not being the same length. I can confirm that they are not the same length; however, I do not know how to make them the same length.

He created a batch_creator function which should handle this. Here is the batch_creator function:

def batch_creator(batch_size, dataset_length, dataset_name):
    """Create batch with random samples and return appropriate format"""
    batch_mask = rng.choice(dataset_length, batch_size)

    batch_x = eval(dataset_name + '_x')[[batch_mask]].reshape(-1, 784)
    batch_x = preprocess(batch_x)

    if dataset_name == 'train':
        batch_y = eval(dataset_name).ix[batch_mask, 'label'].values
        batch_y = dense_to_one_hot(batch_y)
    return batch_x, batch_y

I do not understand the line:
batch_x = eval(dataset_name + '_x')[[batch_mask]].reshape(-1, 784)

At this point batch_x has shape [512, 784] while batch_y has shape [128, 10].

I am trying my best to understand what the batch_x line is doing, but I cannot seem to figure it out, and I do not know how to properly fix the size. I also get deprecation warnings for this line of code.
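Trying to reproduce the mismatch in isolation, this is my current guess at what happens, assuming my images are not actually 28×28 (since 512 = 4 × 128):

```python
import numpy as np

batch_mask = np.arange(128)  # pretend mask selecting 128 samples

# Case 1: images really are 28x28 = 784 pixels -> 128 rows, as intended.
imgs_28 = np.zeros((1000, 28, 28))
print(imgs_28[batch_mask].reshape(-1, 784).shape)  # (128, 784)

# Case 2: images are larger, e.g. 56x56 = 3136 = 4 * 784 pixels.
# reshape(-1, 784) does not fail; numpy just produces 4 rows per image,
# so 128 images become 512 rows, which no longer lines up with the
# 128 one-hot labels in batch_y.
imgs_56 = np.zeros((1000, 56, 56))
print(imgs_56[batch_mask].reshape(-1, 784).shape)  # (512, 784)
```

(The deprecation warnings seem to come from the double-bracket indexing [[batch_mask]] in the article's version; a plain [batch_mask] selects the same rows.)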

If you could please help me work through this problem it would be greatly appreciated. I have been trying for a long time and just cannot get it.

Thank you

© Copyright 2013-2019 Analytics Vidhya