Converting yolov3.onnx with trtexec to an INT8 or FP16 engine: detections in DeepStream are completely wrong

Using trtexec on Jetson NX to convert yolov3.onnx to either an INT8 or an FP16 engine gives the same result in DeepStream: the detection accuracy is completely wrong.

sudo ./trtexec --onnx=./yolov3-416.onnx --batch=1 --workspace=1024 --int8 --calib=./calib_yolov3-416.cache --saveEngine=./yolov3-416-gpu.engine --verbose

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

The command you shared is for TensorRT.
Did you get the correct result with TensorRT?

This information will help us determine whether the issue comes from DeepStream or from TensorRT.

Thanks.
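
The check suggested above can be sketched as follows. This is a minimal sketch, not a verified recipe: the file names are the ones from the command earlier in the thread, and the idea is to exercise the engine with trtexec alone, with DeepStream out of the picture. Since both INT8 and FP16 engines show the same wrong results, an FP16 baseline is useful: FP16 normally changes accuracy only slightly, so a wrong FP16 result points at the ONNX model or the conversion itself rather than at INT8 calibration.

```shell
# Build an FP16 engine as a baseline (no calibration cache involved)
./trtexec --onnx=./yolov3-416.onnx --fp16 \
          --saveEngine=./yolov3-416-fp16.engine

# Run the saved engine standalone and print its raw output tensors,
# so they can be compared against the ONNX model's output
./trtexec --loadEngine=./yolov3-416-fp16.engine --dumpOutput
```

If the dumped outputs already look wrong here, the problem is on the TensorRT/ONNX side; if they look reasonable, the DeepStream configuration (e.g. the nvinfer preprocessing or the output parser) is the more likely cause.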