Accuracy with re-training SSD network from Jetson Inference

Hi all,

I am having a (maybe) strange issue. I have about 800 images of an object (a person) on which I re-trained the SSD network, following the steps provided by the Jetson Inference package. I have built models from both 30 and 100 epochs. Neither model is able to detect the person, even in the same images the model was trained on.
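For reference, the re-training and export steps I followed were roughly these (paths reflect the tutorial's default layout, adjusted for my dataset; your directory names may differ):

```shell
# Assumed layout from the jetson-inference "Hello AI World" SSD tutorial
SSD_DIR=jetson-inference/python/training/detection/ssd

# Re-train SSD-Mobilenet on a Pascal-VOC-format dataset of ~800 images
python3 "$SSD_DIR/train_ssd.py" --dataset-type=voc \
        --data="$SSD_DIR/data/person" \
        --model-dir="$SSD_DIR/models/person" \
        --batch-size=4 --epochs=100

# Export the trained checkpoint to ONNX for use with detectnet
python3 "$SSD_DIR/onnx_export.py" --model-dir="$SSD_DIR/models/person"
```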

I was expecting it would be able to pick up every person it was specifically trained on.

Could you please let me know if this is normal, and if not, what I can do to fix it? I do want the model to generalize to images it was not trained on, but if it cannot even detect a person it was trained on, I don't see it working well on other people.

Thanks! All comments are appreciated.

Hi @jcidoniwalker, if you are testing your trained ONNX model with detectnet/detectnet.py, can you try deleting the *.engine file from your model’s folder? It will then re-generate the TensorRT engine the next time you run detectnet. It’s possible it was using an old version of the model that you had exported.
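A minimal sketch of that, assuming the default model directory from the re-training tutorial (adjust `MODEL_DIR` to wherever your ONNX model was exported):

```shell
# Assumed model directory; change this to your model's folder
MODEL_DIR=jetson-inference/python/training/detection/ssd/models/person

# Delete the cached TensorRT engine so it gets re-generated from the current ONNX file
rm -f "$MODEL_DIR"/*.engine

# Re-run detection; the first run rebuilds the engine (this can take a few minutes)
detectnet --model="$MODEL_DIR/ssd-mobilenet.onnx" \
          --labels="$MODEL_DIR/labels.txt" \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          "images/test/*.jpg" "images/test_output"
```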

Next, I would try using the run_ssd_example.py script on one of your PyTorch checkpoints, and see if that’s able to detect any objects. This script can be used to test your trained PyTorch model before it gets exported to ONNX.
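Something like the following, assuming the tutorial's directory layout (the checkpoint filename here is a hypothetical example; substitute one of the `.pth` files from your own training run):

```shell
SSD_DIR=jetson-inference/python/training/detection/ssd

# Usage: run_ssd_example.py <net-type> <checkpoint> <labels> <image>
# mb1-ssd is the SSD-Mobilenet-v1 network used by the re-training tutorial
python3 "$SSD_DIR/run_ssd_example.py" mb1-ssd \
        "$SSD_DIR/models/person/mb1-ssd-Epoch-99-Loss-2.5.pth" \
        "$SSD_DIR/models/person/labels.txt" \
        person_test.jpg
```

If the PyTorch checkpoint detects objects here but the ONNX model does not, the problem is in the export/engine step rather than the training.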

Hi @dusty_nv,

Thanks for your response. I did see a massive increase in detections after deleting the *.engine file. If I run the model against the same set of images it was trained on, should I expect a detection every time, at 100% confidence?

No, not necessarily, because even during training the model probably did not detect 100% of the training dataset.

