A Practical Implementation of the Faster R-CNN Algorithm for Object Detection (article: https://www.analyticsvidhya.com/blog/2018/11/implementation-faster-r-cnn-python-object-detection/)

Hi, I was trying to train Faster R-CNN on my own dataset for object detection. As a sample, I only created a dataset with 7 training images and 3 test images. Then I created annotation.csv and annotation.txt for the training set, and test_annotation.csv and test_annotation.txt for the test set, which both contain the width, height, and bounding boxes…like what you did in the example.
So, to first train the CNN model, I do not know how to use my dataset, nor how to see the training loss and accuracy of the trained CNN model before continuing to the RPN model.
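For context, the simple annotation format from the article is one line per bounding box (filepath,x1,y1,x2,y2,class_name). A rough conversion from a CSV could look like the sketch below; the column names are only placeholders for whatever is actually in the CSV.

```python
import pandas as pd

# Assumed column names; rename these to match the actual annotation.csv.
df = pd.read_csv('annotation.csv')

# One line per bounding box: filepath,x1,y1,x2,y2,class_name
with open('annotate.txt', 'w') as f:
    for _, row in df.iterrows():
        f.write('{},{},{},{},{},{}\n'.format(
            row['filename'], row['xmin'], row['ymin'],
            row['xmax'], row['ymax'], row['class']))

# The article, if I recall correctly, then starts training with:
#   python train_frcnn.py -o simple -p annotate.txt
# and prints the RPN/classifier losses and accuracy per epoch.
```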

@samrawiter Are you sure you want to train any kind of R-CNN with the amount of data you have?

Because, depending on your number of parameters and on whether you present artificial variants of your training set to the network, it might succeed, but I would highly recommend presenting a larger amount of training samples, with a representative number of variations for each class.

This will give your network a chance to really adapt to the main problem and to the data in a robust way.
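On the artificial-variants point, here is a minimal sketch of one such augmentation for detection data: a horizontal flip that also moves the bounding boxes. It is plain NumPy, nothing framework-specific (the keras-frcnn code from the article also has its own flip/rotation options, if I remember correctly).

```python
import numpy as np

def hflip_with_boxes(image, boxes):
    """Horizontally flip an image (H x W x C array) and its boxes.

    boxes: array of [x1, y1, x2, y2] in pixel coordinates.
    Returns the flipped image and the adjusted boxes.
    """
    h, w = image.shape[:2]
    flipped = image[:, ::-1, :].copy()
    boxes = np.asarray(boxes, dtype=np.float32).copy()
    # After a horizontal flip, x_new = w - 1 - x_old; swap x1/x2 so x1 < x2 still holds.
    x1 = w - 1 - boxes[:, 2]
    x2 = w - 1 - boxes[:, 0]
    boxes[:, 0], boxes[:, 2] = x1, x2
    return flipped, boxes

# Example: double a tiny training set by adding flipped copies.
image = np.zeros((256, 256, 3), dtype=np.uint8)
boxes = [[10, 20, 60, 90]]
aug_image, aug_boxes = hflip_with_boxes(image, boxes)
```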

For instance, if you have 120 3x3 filters and then 10 256x256 filters for your class representation, there is nothing stopping the network from overfitting exactly to the examples you give it, since it has that many free parameters without any constraint (as far as I know for R-CNNs).

You can try to calculate for yourself how likely it is that the network learns a particular combination of filters that exactly represents those classes, given just 7 training images per iteration.
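As a back-of-the-envelope illustration of that ratio (the channel counts here are just assumptions to make the arithmetic concrete):

```python
# Rough parameter count for the hypothetical filters above, assuming 3 input
# channels for the 3x3 layer and 120 input channels for the 256x256 layer.
conv1 = 120 * (3 * 3 * 3) + 120        # 120 3x3 filters + biases  -> 3,360
conv2 = 10 * (256 * 256 * 120) + 10    # 10 256x256 filters + biases -> ~78.6 million
total = conv1 + conv2
print(total, 'free parameters vs. 7 training images')
```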

Depending on the total number of objects, it could also get better, but it will not be very robust, because of the number of variations that can occur at test time and, especially, that would have to be learned by the R-CNN.

There are plenty of datasets made exactly for the kind of problems one faces when using such a method.

R-CNNs are pretty compact and a little less expensive than their counterparts.
Another option would be transfer learning with YOLO!
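To illustrate the transfer-learning idea itself (using a plain ImageNet classification backbone in Keras as a stand-in, not YOLO), the pattern is: freeze a pretrained backbone and only train a small head on your few images. The class count below is an assumption.

```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

# Pretrained ImageNet backbone; its weights stay frozen, so the handful of
# training images only has to fit the small head on top.
backbone = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
backbone.trainable = False

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation='relu'),
    layers.Dense(3, activation='softmax'),  # assumed: 3 object classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```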

The most robust, and quite different, way of approaching this would be a combination with some variational inference.
The simplest example is a variational autoencoder coupled with another classifier.
Another option would be Bayes by Backprop.
Otherwise, for something really sophisticated, there is no way around Automatic Differentiation Variational Inference (ADVI), which is not tied to Adam optimization (so it can be used from the start).
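The common ingredient in these approaches is the reparameterization trick, which is what keeps the sampling step differentiable. A minimal NumPy sketch (the shapes and values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    """Reparameterization trick: draw z ~ N(mu, sigma^2) as a deterministic
    function of (mu, log_var) plus independent noise, so gradients can flow
    through mu and log_var during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Example: a 4-dimensional latent code predicted by an encoder.
mu = np.array([0.1, -0.3, 0.0, 0.7])
log_var = np.array([-1.0, -1.0, -2.0, -0.5])
z = sample_latent(mu, log_var)  # feed z to the decoder / downstream classifier
```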

For just some examples to train R-CNNs on, I would recommend using a labeled biomedical dataset, like some x-ray slices, where the network has to locate abnormalities, etc.

To find some data, Kaggle is the first place to go!
