I converted the CenterFace ONNX file to a TensorRT engine with trtexec, through the tensorrt Python module, and through nvinfer's runtime conversion in DeepStream, but the converted model gives wrong detection bboxes and landmark points, whereas inference through the ONNX file is correct.
ONNX file link: CenterFace/centerface.onnx at master · Star-Clouds/CenterFace · GitHub
TensorRT version: tried on 7.0.0 and 7.2.2
dGPU: T4 and P40
Input image shape: 640 x 480
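For reference, a minimal sketch of how such an engine is typically built through the tensorrt Python API (not the exact script used here; the input reshape to 1x3x480x640 and the builder settings are assumptions):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def build_engine(onnx_path="centerface.onnx", engine_path="centerface.engine"):
    # The ONNX parser in TensorRT 7 requires an explicit-batch network
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(flags) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:

        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                raise RuntimeError("Failed to parse the ONNX file")

        # Assumption: fix the input to the inference resolution (1x3x480x640);
        # some workflows edit the ONNX input dims before parsing instead.
        network.get_input(0).shape = [1, 3, 480, 640]

        config = builder.create_builder_config()
        config.max_workspace_size = 1 << 30  # 1 GB workspace, arbitrary choice

        engine = builder.build_engine(network, config)
        with open(engine_path, "wb") as f:
            f.write(engine.serialize())
        return engine

if __name__ == "__main__":
    build_engine()
```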
Hi,
Could you please provide the setup info as in other topics?
Do you mean that providing a TRT engine to nvinfer gives wrong output, but providing the ONNX file to nvinfer gives correct output?
Providing the ONNX file to nvinferserver gives correct output. If I give the ONNX file to nvinfer, it implicitly builds a TRT engine from it and runs inference on that engine, which gives wrong output.
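For context, which of the two paths nvinfer takes depends on the keys in its config file; a minimal sketch of the relevant [property] entries (file names and parser entries below are placeholders, not the actual config):

```
[property]
# Path 1: give nvinfer the ONNX file; it builds (and caches) a TRT engine itself
onnx-file=centerface.onnx
# Path 2: give nvinfer a pre-built engine, e.g. one produced by trtexec
#model-engine-file=centerface.engine
network-mode=0
# custom output parser (placeholder names)
parse-bbox-func-name=NvDsInferParseCenterFace
custom-lib-path=libnvds_infercustomparser_centerface.so
```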
On this I have a question:
have you tried explicit conversion to TRT before giving it to nvinfer, or did you let nvinfer convert on the fly from ONNX and generate the TRT engine?
Yes, I also tried using an engine file built through trtexec and through InsightFace-REST/build_centerface_trt.py at master · SthPhoenix/InsightFace-REST · GitHub, but the output was the same in all cases.
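For reference, the trtexec path is a single command roughly along these lines (a sketch; the exact flags used may have differed):

```
trtexec --explicitBatch \
        --onnx=centerface.onnx \
        --saveEngine=centerface.engine
```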
I think this indicates the model is correct. Did you check whether the model output parser of nvinfer is correct?
I am using the same custom parser in nvinferserver and it works fine there. Moreover, nvinferserver also gives different outputs for the ONNX and TRT files, so I guess the issue is not with the output parser.
I am attaching sample outputs for the ONNX and TRT files for your reference. The first one is the TRT output and the second is the ONNX output.
Did you solve the issue?
I am facing the exact same issue for the same model.
Thanks
No, it is still to be resolved.
For CenterFace TRT, please check if you can refer to No Detection for Centerface Model Inference with Deepstream(for Centerface) - #3 by mchi
I checked these steps on the DS-5.0 container and it still gives wrong output there, but it works fine with DS-5.1, so I guess this is a version issue.
Anyway, this issue has been resolved.
Thanks