Lower detection rate on small objects after conversion to TensorRT

Hi there, my system configuration is as follows:

• Hardware Platform: 1080ti
• DeepStream Version: 6.0
• TensorRT Version: 8.2.0.6
• NVIDIA GPU Driver Version: 470.82.01
• Issue Type: questions

I am trying to create a face recognition application with DeepStream 6.0. Before recognizing faces, I need to detect them.

I use the “mobilenet0.25_Final.pth” PyTorch weights, which can be downloaded from here.

Now, I want to convert the PyTorch weights to TensorRT. To do so, I first convert them to ONNX using the Python script torch_to_onnx.py, which can be downloaded from here.
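
For context, the export roughly boils down to something like this (just a sketch; the module paths, the 'module.' prefix handling, and the output names are my assumptions based on the usual Pytorch_Retinaface repo layout, so the actual torch_to_onnx.py may differ):

import torch
from models.retinaface import RetinaFace   # assumption: Pytorch_Retinaface repo layout
from data import cfg_mnet                  # assumption: mobilenet0.25 config

net = RetinaFace(cfg=cfg_mnet, phase='test')
# the checkpoint may store keys with a 'module.' prefix if trained with DataParallel
state = torch.load('mobilenet0.25_Final.pth', map_location='cpu')
state = {k.replace('module.', ''): v for k, v in state.items()}
net.load_state_dict(state)
net.eval()

# dummy input at the 288x320 resolution used for the engine
dummy = torch.randn(1, 3, 288, 320)
torch.onnx.export(net, dummy, 'retina-mobile0.25-288x320.onnx',
                  input_names=['input'],
                  output_names=['bbox', 'conf', 'landmark'],   # assumption: three heads
                  opset_version=11)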

The export works fine and gives me an .onnx file at the output. Then, I convert the generated .onnx file to a TensorRT engine with the following command:

/usr/src/tensorrt/bin/trtexec --onnx=retina-mobile0.25-288x320.onnx   --saveEngine=retina-mobile0.25-288x320.engine

The engine file “retina-mobile0.25-288x320.engine” is successfully generated at the end.
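
As a quick sanity check that the engine really ends up with the expected 288x320 input, it can be deserialized and its bindings printed with the TensorRT Python API (a rough sketch):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open('retina-mobile0.25-288x320.engine', 'rb') as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# print every binding with its shape and dtype
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i),
          engine.get_binding_shape(i),
          engine.get_binding_dtype(i),
          'input' if engine.binding_is_input(i) else 'output')
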
Since this is a custom network, it requires a custom bounding-box parser function; for that, I use the custom parser function from this link.

Finally, I use the config_detection.txt and config_main.txt files from this link to perform detection.

The problem, however, is that the detection results obtained with the generated TensorRT engine “retina-mobile0.25-288x320.engine” and with the original PyTorch weights (exported to “retina-mobile0.25-288x320.onnx”) are not the same. In particular, the generated TensorRT engine fails to capture small faces. Let's take a look at the detection results:

Result obtained with the generated TensorRT engine:

Results obtained from the original PyTorch weights (disregard the landmarks):
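
As a side note, the ONNX file itself can also be run outside DeepStream (for example with onnxruntime) on the same frame, to see whether the small faces are already lost at the ONNX stage or only inside the pipeline. Something along these lines, where the image path and the RetinaFace-style preprocessing (BGR, mean subtraction, no scaling) are my assumptions and would have to match what DeepStream does:

import cv2
import numpy as np
import onnxruntime as ort

# assumption: test.jpg is a placeholder for one of the frames shown above
img = cv2.imread('test.jpg')
img = cv2.resize(img, (320, 288)).astype(np.float32)   # (width, height) -> 288x320 input
img -= (104.0, 117.0, 123.0)                            # assumption: RetinaFace mean values
blob = np.transpose(img, (2, 0, 1))[None, ...]          # HWC -> NCHW

sess = ort.InferenceSession('retina-mobile0.25-288x320.onnx')
outputs = sess.run(None, {sess.get_inputs()[0].name: blob})
for out in outputs:
    print(out.shape)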

Could you kindly provide me with a solution to this?

Please take a look at Pytorch model deployed on deepstream no predictions - #6 by pyml to make sure you set the key gie infer properties correctly.

Thanks!

Thank you for the reply!

I have taken a look at the parameters mentioned in the input arguments of the PyTorch detector code (lines 19 to 32 of this code), which they use for their detector.
There seem to be multiple thresholds defined there, such as confidence_threshold, nms_threshold, and vis_thres.
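
As far as I can tell, those thresholds are applied roughly like this in their detect.py (only a sketch, not their exact code: I use torchvision's nms here, and the default values are what I read from their argparse defaults):

import numpy as np
import torch
from torchvision.ops import nms

confidence_threshold = 0.02   # assumption: defaults from the reference detect.py
nms_threshold = 0.4
vis_thres = 0.6

def filter_detections(boxes, scores):
    """boxes: (N, 4) float array, scores: (N,) float array of face confidences."""
    # 1) drop low-confidence candidates
    keep = scores > confidence_threshold
    boxes, scores = boxes[keep], scores[keep]
    # 2) non-maximum suppression
    keep = nms(torch.from_numpy(boxes), torch.from_numpy(scores), nms_threshold).numpy()
    boxes, scores = boxes[keep], scores[keep]
    # 3) vis_thres is only used to decide which final boxes get drawn/printed
    visible = scores > vis_thres
    return boxes, scores, visible

So vis_thres only affects what is visualized, while confidence_threshold and nms_threshold actually shape the detections themselves.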

In my DeepStream config file, on the other hand, it seems that I only have “pre-cluster-threshold” and “post-cluster-threshold” (set to 0.6 and 0.4, respectively) to tune. When I set these parameters to small values (e.g. 0.02), some wrong detections are added, but surprisingly there is still no improvement on small objects (i.e. small faces are still not detected).

Hi @peyman.rostami,
I created an FAQ - DeepStream SDK FAQ - #21 by mchi about this; please take a look and see if it helps.

Thanks!
