Why is model accuracy high while validation accuracy is very low?



I’m building a sentiment analysis program in Python using the Keras Sequential model for deep learning.

My data is about 20,000 tweets:

positive tweets: 9152 tweets
negative tweets: 10849 tweets

I wrote a Sequential model script to perform the binary classification:

model.add(Embedding(vocab_size, 100, input_length=max_words))
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(Dense(250, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

Fit the model:

history = model.fit(X_train[train], y1[train], validation_split=0.30, epochs=2, batch_size=128, verbose=2)

However, I get very strange results! The training accuracy is almost perfect (above 90%), whereas the validation accuracy is very low (below 4%), as shown below:

Train on 9417 samples, validate on 4036 samples
Epoch 1/2

 - 13s - loss: 0.5478 - acc: 0.7133 - val_loss: 3.6157 - val_acc: 0.0243
Epoch 2/2
 - 11s - loss: 0.2287 - acc: 0.8995 - val_loss: 5.4746 - val_acc: 0.0339

I tried increasing the number of epochs, but that only raises the training accuracy and lowers the validation accuracy further.

Any advice on how to overcome this issue?

Thank you!


Hi @amy.dj

The data is not preprocessed correctly. I would suggest you look into the preprocessing and then fit the model.


I don’t have a preprocessing step … I only use word embeddings (word2vec)


Hi @amy.dj,

What @AishwaryaSingh essentially means is that whatever operations you have performed on the training dataset should be replicated on the test dataset.

For example, you may have done a feature scaling step, where you normalized all the variables in the train set to the range 0 to 1. The same transformation, using the statistics computed on the train set, should be applied to the test data too; only then will the model see both sets on a consistent scale.
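As a minimal sketch of that idea (using plain NumPy min-max scaling rather than any specific library; the arrays here are made-up stand-ins for your data):

```python
import numpy as np

# Fit the scaling statistics on the TRAIN set only ...
def fit_minmax(X_train):
    return X_train.min(axis=0), X_train.max(axis=0)

# ... and reuse those SAME statistics on any other split.
def apply_minmax(X, lo, hi):
    return (X - lo) / (hi - lo)

X_train = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
X_test = np.array([[5.0, 25.0]])

lo, hi = fit_minmax(X_train)
X_train_scaled = apply_minmax(X_train, lo, hi)
X_test_scaled = apply_minmax(X_test, lo, hi)  # never recompute min/max on test
```

The key point is that the test set is transformed with the train set's min and max, not its own.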

There could also be multiple reasons for your model to perform poorly, such as:

  • Too few epochs, which causes underfitting
  • An incorrect splitting mechanism between the training and validation sets
  • Untuned hyperparameters
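The splitting point is worth checking first, because Keras's validation_split takes the last fraction of the data without shuffling. If your tweets happen to be stored sorted by class (all positives, then all negatives — an assumption about your data ordering), the validation set ends up almost entirely one class, which produces exactly this pattern of high train accuracy and near-zero val accuracy. A toy sketch of the fix, shuffling with NumPy before calling fit (the arrays stand in for your X_train and y1):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in data, deliberately sorted by class: all 0s, then all 1s.
X = np.arange(20).reshape(10, 2)
y = np.array([0] * 5 + [1] * 5)

# Shuffle features and labels with the SAME permutation before model.fit(...)
perm = rng.permutation(len(X))
X_shuffled, y_shuffled = X[perm], y[perm]

# Now the last 30% that validation_split would carve off is no longer
# guaranteed to be a single class.
```

Alternatively, a stratified split (e.g. scikit-learn's train_test_split with stratify=y) guarantees both classes appear in the validation set.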

What I would recommend is debugging your neural network to find out what is going wrong inside it. You can check out this article for reference
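One quick debugging step along those lines: before blaming the network, inspect the class balance of the slice that validation_split=0.30 would carve off (the last 30% of the labels). This is a hypothetical diagnostic with a made-up label array standing in for your y1[train]:

```python
import numpy as np

y = np.array([1] * 7 + [0] * 3)  # stand-in labels, sorted by class

# validation_split=0.30 takes the LAST 30% of the samples
n_val = int(len(y) * 0.30)
val_labels = y[-n_val:]

counts = dict(zip(*np.unique(val_labels, return_counts=True)))
# If counts shows only one class, the split (not the network) is the problem.
```

If the counts are heavily skewed toward one class, shuffle or stratify the data before fitting.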