Inference of a model using TensorFlow/ONNX Runtime and TensorRT gives different results

Hi,

We have confirmed this is an application issue rather than a TensorRT error.

To align the input pre-processing, we rewrote your app with the TensorRT Python interface.
After that, we get the same output as ONNX Runtime.

Please check whether there is any difference in the OpenCV pre-processing on your side (e.g., BGR vs. RGB channel order, resize interpolation, normalization, or data layout).

trt.py.txt (1.7 KB)
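
The attached script follows the standard TensorRT Python workflow: deserialize the engine, copy the pre-processed tensor to the GPU, execute, and copy the output back. It is roughly along these lines; the image path, the RGB/[0,1] pre-processing, and the TensorRT 8.x bindings API are assumptions here, so adjust them to your setup:

import cv2
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # initializes a CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def preprocess(path):
    # Every step here must match what onnxruntime receives.
    img = cv2.imread(path)                      # HWC, BGR, uint8
    img = cv2.resize(img, (64, 128))            # cv2.resize takes (width, height) -> 128x64x3
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # assumption: the model expects RGB
    img = img.astype(np.float32) / 255.0        # assumption: plain [0,1] scaling, no mean/std
    return np.ascontiguousarray(img)

with open("model.trt", "rb") as f:
    engine = trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

inp = preprocess("test.jpg")  # assumed file name
out = np.empty(tuple(engine.get_binding_shape(1)), dtype=np.float32)

d_inp = cuda.mem_alloc(inp.nbytes)
d_out = cuda.mem_alloc(out.nbytes)
cuda.memcpy_htod(d_inp, inp)
context.execute_v2([int(d_inp), int(d_out)])  # binding 0 = input, 1 = output
cuda.memcpy_dtoh(out, d_out)
print(out)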

Note that trtexec expects the shape flags in name:dims form with the dimensions separated by x; replace input below with your model's actual input tensor name (e.g., as shown by Netron):

$ /usr/src/tensorrt/bin/trtexec --onnx=model_tf_float_opset10.onnx --minShapes=input:128x64x3 --optShapes=input:128x64x3 --maxShapes=input:128x64x3 --dumpOutput --saveEngine=model.trt
$ python3 trt.py
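
To verify that the pre-processing is really aligned, you can feed the identical array to onnxruntime and compare the outputs element-wise. This snippet reuses the preprocess helper and the out array from the sketch above; the tolerance is only a rough expectation for an FP32 engine:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model_tf_float_opset10.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
ref = sess.run(None, {input_name: preprocess("test.jpg")})[0]

print("max abs diff:", np.abs(ref - out).max())  # expect roughly 1e-3 or less for FP32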

Thanks.