How to give a negative dataset while training a model on TLT

Hello Morganh,
I am using TLT ResNet-10 and ResNet-18 but am not getting good results. I have the following classes for training:
1. car 2. Bus 3. Truck 4. LCV 5. Van 6. JCB 7. Autorickshaw 8. Bike

I am getting good results on Bus and Bike, but on other classes such as Truck it draws boxes for LCV, JCB, Truck and sometimes Van, and the same happens for JCB, LCV and Car. How should I proceed to get good results?
Also, how can I give a negative dataset in the Transfer Learning Toolkit while training the model?

Hi pritam,
Could you please explain more about what you mean by “negative dataset”?
Do you mean something like the following, where some negative samples are provided to TLT?
For example:
positive samples for cars: you already provide some car images, and their label is car.
negative samples for cars: images whose label is not car.

Hi pritam,
Is this still an issue that needs support? Are there any results you can share?

Thank you Morganh,
Apologies for the late reply. By negative data I mean that I am training my model with four classes (car, truck, bus, lcv), and I want to annotate some objects as “dontcare” that look like car/truck/bus/lcv but are not exactly those classes, so that when I run inference the dontcare objects are not detected and that kind of data is ignored.
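For example (the box coordinates below are just made-up placeholders), a KITTI-format label file for one of my images might look like this, where the dontcare line marks an object that resembles my classes but should be ignored:

    car 0.00 0 0.00 601.00 180.00 698.00 250.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    truck 0.00 0 0.00 120.00 150.00 310.00 260.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    dontcare 0.00 0 0.00 410.00 160.00 480.00 220.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00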
So how should I set up the configuration file for DetectNet_v2 ResNet-18? Should I include that dontcare class in the configuration file or not?
Please help.

So you mean negative samples. But detectnet_v2 does not use negative samples during training.
FRCNN/SSD do use negative/background samples.

In detectnet_v2, if you set the “dontcare” class in the spec, then the training will train this class too.
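For reference, here is a minimal sketch of the dataset_config section of a detectnet_v2 training spec (the tfrecords and image paths below are placeholders, adjust them to your own setup):

    dataset_config {
      data_sources {
        tfrecords_path: "/workspace/tlt-experiments/data/tfrecords/kitti_trainval/*"
        image_directory_path: "/workspace/tlt-experiments/data/training"
      }
      image_extension: "jpg"
      # classes mapped here are trained; label names not mapped here are ignored
      target_class_mapping {
        key: "car"
        value: "car"
      }
      target_class_mapping {
        key: "truck"
        value: "truck"
      }
      # adding this mapping makes "dontcare" a trained class;
      # leave it out if you do not want the network to learn it
      target_class_mapping {
        key: "dontcare"
        value: "dontcare"
      }
      validation_fold: 0
    }

If you do keep “dontcare” as a target class, it also needs matching entries in cost_function_config and postprocessing_config like any other class.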

Thanks Morganh for the response.
Which meta-architecture should we use for object detection with ResNet-18 to get good results: DetectNet_v2, FRCNN or SSD?

You can give SSD a try.
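For example, SSD training can be launched roughly like below (the spec path, results directory and $KEY are placeholders for your own values; see tlt-train ssd --help for the full list of options):

    # train an SSD model with TLT on a single GPU
    tlt-train ssd --gpus 1 \
                  -e /workspace/examples/ssd/specs/ssd_train_resnet18_kitti.txt \
                  -r /workspace/tlt-experiments/ssd_resnet18 \
                  -k $KEY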

Thanks Morganh.
I will try that.

Hi @Morganh,
Is there any comparative analysis of SSD ResNet-10 versus DetectNet_v2 ResNet-10 trained with TLT, in terms of speed and accuracy (especially for object detection)?

I did similar experiments with ResNet-18 on the KITTI dataset. You can also run the Jupyter notebook to train on your side. I did not tune any hyperparameters for either network. From the results, both can reach about 80% mAP after training. FPS depends on how much the TLT model is pruned: DetectNet_v2 can reach more than 300 fps with fp16 at batch size 2 on Xavier, and SSD can reach more than 260 fps. DetectNet_v2 can still reach more than 75% mAP after retraining.