I have trained a custom SSD-Mobilenet-v1 model with the TensorFlow Object Detection API and converted the frozen .pb model to a UFF model. I successfully ran it with the project "GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson". However, the results with TensorRT are not as good as when inference is run with TensorFlow: the detection confidences are lower and the predicted box locations are less accurate.
I want to know why.
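For reference, a rough side-by-side check could look like the sketch below. The tensor names are the standard TF Object Detection API outputs, the image and model paths are placeholders, and the detectNet network name is a placeholder (my custom UFF model is actually loaded through the same command-line arguments that detectnet-console uses):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

import jetson.inference
import jetson.utils

IMAGE_PATH = "test.jpg"                  # placeholder test image
FROZEN_PB = "frozen_inference_graph.pb"  # placeholder frozen graph

# --- TensorFlow side: run the frozen graph directly ---
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(FROZEN_PB, "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.compat.v1.Session() as sess:
    tf.import_graph_def(graph_def, name="")
    image = np.asarray(Image.open(IMAGE_PATH).convert("RGB"))
    # Standard TF Object Detection API output tensors
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image[None, ...]},
    )
    print("TensorFlow top-5 scores:", scores[0][:5])

# --- TensorRT side: jetson-inference detectNet ---
# (the built-in network name below is a placeholder; a custom UFF model
#  is passed through the detectnet-console command-line arguments)
net = jetson.inference.detectNet("ssd-mobilenet-v1", threshold=0.3)
img, width, height = jetson.utils.loadImageRGBA(IMAGE_PATH)
for det in net.Detect(img, width, height)[:5]:
    print("TensorRT detection:", det.ClassID, det.Confidence)
```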
Hi,
A common cause is that the two applications apply different preprocessing/postprocessing to the detector.
For example: color format, mean subtraction/normalization, etc.
Could you first double-check that the preprocessing and postprocessing are identical in both frameworks?
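For reference, here is a minimal sketch of the preprocessing the TF Object Detection API usually applies to SSD-Mobilenet-v1. The resize size and the [-1, 1] normalization below are the usual defaults, not values read from your model, so please verify them against your pipeline.config:

```python
import numpy as np
from PIL import Image

def tf_odapi_style_preprocess(path, size=300):
    """Typical SSD-Mobilenet-v1 input pipeline from the TF Object Detection API."""
    img = Image.open(path).convert("RGB")            # RGB order, not BGR
    img = img.resize((size, size), Image.BILINEAR)   # fixed_shape_resizer 300x300
    x = np.asarray(img).astype(np.float32)
    return x * (2.0 / 255.0) - 1.0                   # scale to [-1, 1]; no per-channel mean

x = tf_odapi_style_preprocess("test.jpg")            # placeholder image path
print(x.shape, x.min(), x.max())
```

Any mismatch with the TensorRT pipeline (RGB vs. BGR, a different input resolution, or mean/std normalization instead of the [-1, 1] scaling) can directly lower the confidences and shift the predicted boxes.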
Thanks.