TensorRT 8: C++ inference gives different results compared to TensorFlow Python inference

Description

Hi there,
I got a SavedModel converted to ONNX in order to run inference using the TensorRT C++ API,
but the output results are different from the Python inference and I don't know why. The values seem slightly shifted (a small comparison sketch follows the info list below).

Info:

  • TensorRT 8.2
  • cuDNN 8.2
  • ONNX 1.8
  • CUDA 11
  • I did not set BuilderFlag::kFP16 because my machine does not support it.
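
To quantify the shift, here is a minimal comparison sketch (assuming both outputs are dumped to .npy files; the file names are placeholders):

import numpy as np

# Placeholder paths: save the TensorFlow output with np.save() and dump the
# TensorRT output buffer from the C++ side, then load both here.
tf_out = np.load("output_tf.npy")
trt_out = np.load("output_trt.npy").reshape(tf_out.shape)

# Quantify the difference instead of eyeballing it
print("max abs diff :", np.max(np.abs(tf_out - trt_out)))
print("mean abs diff:", np.mean(np.abs(tf_out - trt_out)))
print("allclose     :", np.allclose(tf_out, trt_out, rtol=1e-3, atol=1e-3))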

Any help would be more than appreciated!

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
We request you to share the ONNX model and the script, if not already shared, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validating your model with the snippet below

check_model.py

import sys
import onnx

# Path to the ONNX model, passed on the command line
filename = sys.argv[1]
model = onnx.load(filename)

# Raises an exception if the model is structurally invalid
onnx.checker.check_model(model)
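
For example, from the command line (assuming your model is saved as model.onnx, a placeholder name):

python check_model.py model.onnx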
  2. Running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
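For example (model.onnx is a placeholder for your model path):

trtexec --onnx=model.onnx --verbose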
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hi,
Thank you for your help, appreciate it!
Here are screenshots of the model from netron.app:

Attached is the TensorRT example that I am using: onnxInference.cpp (4.0 KB)

And yes, I am going to test both of your suggestions as well!
Thank you!

Hi,

Could you please let us know which version of TensorRT you are using? We recommend trying the latest TRT version, 8.2 EA. We also recommend verifying the result with onnx-runtime to confirm whether the problem is with the ONNX model.

If you still face an issue, please share the ONNX model and sample data so that we can try it on our end for better debugging.

Thank you.

Hi,
Thank you for your response; and yes, I was already using TRT 8.2.0.6 EA for Windows.
I have also run the ONNX checker, which passed successfully.
As I explained earlier, my algorithm is for image ring correction, but the results from the C++ TRT inference are a bit different from the Python TensorFlow inference, which gives better results (the values seem a bit shifted and I don't really know why). The SavedModel was trained with Python TensorFlow and converted with tf2onnx.
I am not sure about sharing the ONNX model file; I need to ask my superior.

Are you getting correct results with onnx-runtime?

Hi,
No, I have not set up onnx-runtime yet; I am fairly new to this ONNX/TRT C++ inference workflow, but I can pull onnx-runtime and give it a try.

Could you please try it and confirm to us, to make sure the model itself is working fine?
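
For reference, a minimal sketch of running the model with onnx-runtime (the model path and input file are placeholders; feed it the same test input you use on the TensorFlow side and compare the outputs):

import numpy as np
import onnxruntime as ort

# Placeholder model path and test input; use the same data you feed to TensorFlow
sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
input_data = np.load("test_input.npy").astype(np.float32)

# Run the model; the result is a list with one array per model output
outputs = sess.run(None, {input_name: input_data})
print(outputs[0].shape, outputs[0].dtype)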

Thank you.