Urban Sound Challenge Hackathon

data_science

#1

I found the urban sound challenge an interesting problem for learning how to work with sound samples. It also comes with a decently sized dataset, which makes it feel like a real problem.
I have been working with it for the past few weeks and I have an interesting observation. Despite trying very hard to increase my accuracy score using a feed-forward NN, I am not able to break the 60% mark. Today I tried splitting the training set to create a test set instead of using the provided test set, and with that split I got an accuracy score of 90%.
How can we explain this?


#2

Hi @hbagchi,

I think your model is overfitting. We usually split the dataset into a train and a test (or validation) set to check the performance of our model. So when you make minor changes to your model or perform parameter tuning, you can check the improvement using the validation set.

If you get a very high score on the validation set but not a similar score on the test set, it implies that your model is overfitting on the train dataset.


#3

Hi @AishwaryaSingh

Thanks for replying. Let me reiterate - “…I tried splitting the training set to create a test set instead of using the provided test set and using it I got an accuracy score of 90%…”

In short, I trained on 80% of the training set and used the remaining 20% of the training set as a test set. I did not use a validation set. With this approach I got 90% accuracy on that test set (the 20% of the training set).

However, the same model (trained on 80% of the training set) delivers an accuracy of less than 60% on the provided test set.
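For concreteness, here is a minimal sketch of the kind of 80/20 split I mean. The features and labels are random stand-ins (in practice they would be e.g. MFCCs extracted from the audio clips), and the classifier is just a placeholder, not my actual network:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in features/labels; in practice these would be features
# extracted from the audio clips plus their class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 40))
y = rng.integers(0, 10, size=400)

# 80% train / 20% "test", both carved out of the training data --
# this is the split described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

clf = KNeighborsClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```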

Moreover, the UrbanSound8K site says this - "If you reshuffle the data (e.g. combine the data from all folds and generate a random train/test split) you will be incorrectly placing related samples in both the train and test sets, leading to inflated scores that don’t represent your model’s performance on unseen data. Put simply, your results will be wrong"

I don’t get this!
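For reference, this is roughly what the fold-based evaluation the UrbanSound8K instructions describe would look like. The metadata below is a random stand-in (the real dataset ships a metadata CSV that, to my understanding, includes `fold` and `classID` columns), and the classifier is again just a placeholder:

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier

# Stand-in metadata: fold assignments 1-10 and class labels 0-9.
# Real features would be extracted from the audio files themselves.
rng = np.random.default_rng(0)
n = 500
meta = pd.DataFrame({
    "fold": rng.integers(1, 11, size=n),
    "classID": rng.integers(0, 10, size=n),
})
X = rng.normal(size=(n, 40))

# Leave-one-fold-out: never reshuffle across folds, so slices that
# came from the same original recording stay on one side of the split.
scores = []
for held_out in range(1, 11):
    train_mask = (meta["fold"] != held_out).to_numpy()
    clf = KNeighborsClassifier().fit(X[train_mask],
                                     meta.loc[train_mask, "classID"])
    scores.append(clf.score(X[~train_mask],
                            meta.loc[~train_mask, "classID"]))

print(f"mean 10-fold accuracy: {np.mean(scores):.2f}")
```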


#4

I understood this part. You must have used a train/test split: 80% of the data for training and 20% of the data for testing. The held-out 20% of the train set is what we also call a validation set.

Your model is overfitting on the training data. Read more about overfitting in this article.

When you split your data into train and test sets, the split can work in two ways: either it is shuffled (rows are randomly selected for the test set) or not shuffled (the train and test rows are taken in order).
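To illustrate the two behaviours, scikit-learn's `train_test_split` exposes this through its `shuffle` parameter (toy data, just for demonstration):

```python
from sklearn.model_selection import train_test_split

data = list(range(10))

# shuffle=False: the test set is simply the tail of the data, in order.
train_o, test_o = train_test_split(data, test_size=0.3, shuffle=False)
print(test_o)  # the last 3 rows: [7, 8, 9]

# shuffle=True (the default): test rows are drawn at random.
train_s, test_s = train_test_split(data, test_size=0.3, random_state=0)
print(sorted(test_s))
```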


#5

Hi @AishwaryaSingh
Thanks again for your reply. I have published the code:
Python Code using 20% of Training set as Test Set
If you run this code, it will give approximately 90% accuracy consistently.

Meanwhile I will check your link on overfitting.

Thanks again.


#6

I changed the branch name. Please use this link -
Python Code using 20% of Training set as Test Set. Also, edited the link in the above post.