Dear Nvidia Team,
I have fine-tuned the unpruned PeopleNet model on around 1000 images (person, face, bag), but I am not getting good results on the images that PeopleNet detected well before re-training.
For example:
So I get a good result on a single image with the original PeopleNet, but on that same image I get a bad result after re-training PeopleNet on my custom dataset, while results on my training data remain good. What is the reason? Should I also include PeopleNet's training data to get results like PeopleNet?
Sorry for the confusion. The re-trained model gives good results on the images that were part of the re-training data (as expected). But on the same unseen image (the third image in the original post, where the unpruned model provided by Nvidia performed well), the re-trained model was not able to detect anything, as you can see (the first image in the original post).
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Also, how did you evaluate "PeopleNet is giving good results on the images which were part of the re-training data (as expected)"? Using tlt-evaluate?
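For reference, a typical tlt-evaluate invocation for a detectnet_v2-based model looks roughly like the sketch below. The spec file path, model path, and key are placeholders, not values from this thread; check tlt-evaluate --help in your TLT release for the exact options.

```shell
# Evaluate a re-trained detectnet_v2 model against the validation split
# defined in the experiment spec file. All paths and $KEY are placeholders.
tlt-evaluate detectnet_v2 \
  -e /workspace/specs/peoplenet_retrain_spec.txt \
  -m /workspace/models/peoplenet_retrained.tlt \
  -k $KEY
```

This reports per-class AP on the validation set, which is a more reliable check than eyeballing tlt-infer output on a handful of images.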
Hi,
I have re-trained PeopleNet with my custom dataset; however, when I run tlt-infer I get wrong detections, e.g. a chair detected as a person, in addition to correctly detecting persons.
How can I avoid the wrong detections?
Is there a concept similar to negative images in YOLO?
That is, a folder of images that you tell YOLO to treat as negatives (containing none of the target classes), which minimises false detections.
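In KITTI-format training pipelines such as TLT detectnet_v2, the usual equivalent of YOLO's negative images is to include background images paired with empty label files, so the model sees samples that contain none of the target classes. A minimal sketch, assuming that convention (the directory layout and helper name are illustrative, not from TLT):

```python
import os

def add_negative_samples(image_dir, label_dir):
    """Write an empty KITTI label file for each image in image_dir.

    An image paired with an empty .txt label is treated as a pure
    background (negative) sample: it contributes no positive boxes,
    only "nothing to detect here" evidence during training.
    """
    os.makedirs(label_dir, exist_ok=True)
    for name in os.listdir(image_dir):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in (".jpg", ".jpeg", ".png"):
            continue  # skip non-image files
        # An empty .txt means: no objects annotated in this image.
        open(os.path.join(label_dir, stem + ".txt"), "w").close()
```

You would then include these image/label pairs in the dataset that tlt-dataset-convert (or your equivalent conversion step) ingests, alongside the annotated positives.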
Thanks