Custom trained SSD-Mobilenet-v2 output detections are jittery.

I trained a custom SSD-Mobilenet-v2 single-class detector and am trying to deploy it on DeepStream. The model works, but the output detections are larger than needed and jump all over the place, centred around the object. I will attach a video showing what I mean.
for_nvidia.avi (4.36 MB)

Hi,

Are you using this sample for your customized model?
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/

If yes, it’s recommended to first check whether the network width/height needs updating.
Thanks.

Yes, I am using the same sample. There is no change in the network height/width; they are the same, 300x300. Except for the number of classes, everything remains the same in the customised model.

Hi AAstaLLl,

I figured out the solution to this problem. Change the minSize parameter in the config.py file used when converting the frozen-graph .pb file into UFF with convert_to_uff.py. The default minSize is 0.2; changing it to 0.05 resolves the problem.
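For reference, the edit lands in the GridAnchor plugin node inside config.py. A sketch of that node with the changed value; every parameter other than minSize follows the stock objectDetector_SSD sample and may need adjusting for your model:

```python
import graphsurgeon as gs

# Sketch of the GridAnchor_TRT plugin node in config.py.
# Only minSize is changed; the other values are the stock
# objectDetector_SSD sample defaults and may differ for your model.
PriorBox = gs.create_plugin_node(
    name="GridAnchor", op="GridAnchor_TRT",
    minSize=0.05,                          # was 0.2
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1],
    numLayers=6)
```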

Any idea why keeping the minSize at 0.2 causes this problem?
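One way to see the effect of this parameter: in SSD, minSize/maxSize set the relative scale of the smallest and largest default anchor boxes, spread linearly across the feature-map layers. A quick sketch of the anchor sizes this implies, assuming the standard 6-layer SSD-300 linear scale formula (the TensorRT GridAnchor plugin may differ in detail):

```python
# Anchor scales from the SSD linear formula:
#   s_k = s_min + (s_max - s_min) * k / (m - 1),  k = 0..m-1
# Converted to pixels on the network input (assumed 300x300).
def anchor_scales(min_size, max_size=0.95, num_layers=6, input_dim=300):
    scales = [min_size + (max_size - min_size) * k / (num_layers - 1)
              for k in range(num_layers)]
    return [round(s * input_dim) for s in scales]

print(anchor_scales(0.20))  # smallest anchor is 60 px on a 300x300 input
print(anchor_scales(0.05))  # smallest anchor shrinks to 15 px
```

With minSize=0.2 the smallest default box already covers 20% of the input (60 px), so small objects can only be matched by anchors much larger than themselves, which would be consistent with oversized, unstable boxes around the object.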