Training with Adversarial/False Positive Data

We are currently looking into running experiments using data that contains images with no true positives, but with objects that are known to cause false-positive (FP) cases during inference. Our question is: what is the required label for these images? We know that TLT does not accept empty KITTI label files for training, so should we annotate and label these false positives within the images? Furthermore, since a ‘background’ class already exists in the FRCNN and SSD models, would it be appropriate to label these FP objects in our training set as ‘background’, and would the network be able to interpret this correctly?

For this case, try creating a new class called “object-background” and assign all of the false positives to it.
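
As an illustration, here is a minimal sketch of writing such labels in TLT's KITTI format, where detection training reads only the class name and the 2D bounding box. The helper name, file paths, and box coordinates are hypothetical, not part of TLT itself:

```python
import os

def write_kitti_labels(label_path, fp_boxes, class_name="object-background"):
    """Write a KITTI label file tagging known false-positive regions
    with the new class, so the label file is non-empty.

    fp_boxes: list of (x1, y1, x2, y2) pixel coordinates of FP regions.
    """
    os.makedirs(os.path.dirname(label_path) or ".", exist_ok=True)
    lines = []
    for x1, y1, x2, y2 in fp_boxes:
        # KITTI fields: type truncated occluded alpha x1 y1 x2 y2 h w l x y z ry
        # Detection training only uses the class name and the 2D bbox,
        # so the remaining 3D fields are zeroed out here.
        lines.append(
            f"{class_name} 0.00 0 0.00 "
            f"{x1:.2f} {y1:.2f} {x2:.2f} {y2:.2f} "
            f"0.00 0.00 0.00 0.00 0.00 0.00 0.00"
        )
    with open(label_path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Hypothetical usage: two known FP regions in one image
write_kitti_labels("labels/image_000123.txt",
                   [(100, 150, 220, 300), (400, 80, 512, 200)])
```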
For the long term, however, the best strategy to reduce FPs would be to retrain the network from scratch with multiple classes, adding the desired ones alongside. I suggest re-introducing a public dataset such as ImageNet or Pascal VOC with 10-20 pre-existing classes, adding your own, and retraining. For example, following https://developer.nvidia.com/blog/preparing-state-of-the-art-models-for-classification-and-object-detection-with-tlt/, train classification models on ImageNet 2012 with 10~20 pre-existing classes plus your own, then use the pretrained weights to train TLT faster_rcnn or SSD object detection models. A sketch of assembling such a merged class list follows below.
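
As a rough sketch of that merging step, the snippet below combines a subset of Pascal VOC classes with custom classes (including the FP catch-all) and filters KITTI label files down to that list before retraining. The specific class selection and helper names are assumptions for illustration, not part of TLT:

```python
# Subset of the 20 Pascal VOC classes to re-introduce (assumed selection)
VOC_CLASSES = [
    "person", "car", "bus", "bicycle", "motorbike", "dog", "cat", "bird",
    "horse", "cow", "sheep", "chair", "sofa", "tvmonitor", "bottle",
]
CUSTOM_CLASSES = ["my_target", "object-background"]  # hypothetical names
TARGET_CLASSES = set(VOC_CLASSES + CUSTOM_CLASSES)

def filter_kitti_file(src_path, dst_path):
    """Copy a KITTI label file, dropping lines whose class name
    is outside the merged class list."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            fields = line.split()
            if fields and fields[0] in TARGET_CLASSES:
                dst.write(line)
```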
