TensorRT model inference result is not correct

Description

I have a TensorFlow regression model. I first convert it to an ONNX model, then convert the ONNX model to a TensorRT engine. But when I run inference with the TensorRT engine, I get wrong results. Could you help?

Note: I am developing on a Jetson Nano.

I use this command to convert the TensorFlow model to an ONNX model:

!python3 -m tf2onnx.convert --saved-model my_tensorflow_model --opset 13 --output my_onnx_model.onnx

I use this command to convert the ONNX model to a TensorRT engine:

! /usr/src/tensorrt/bin/trtexec --onnx=my_onnx_model.onnx --saveEngine=my_trt_model.trt --explicitBatch
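
For reference, my inference step follows the usual TensorRT Python API + PyCUDA pattern, roughly as sketched below (simplified; the binding indices, shapes, and dtype are illustrative, not my exact script):

import numpy as np
import pycuda.autoinit  # creates the CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built by trtexec
with open("my_trt_model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Assume binding 0 is the input and binding 1 is the output (float32 regression)
input_shape = tuple(engine.get_binding_shape(0))
output_shape = tuple(engine.get_binding_shape(1))
h_input = np.random.random(input_shape).astype(np.float32)
h_output = np.empty(output_shape, dtype=np.float32)

# Allocate device buffers, copy input, run synchronously, copy output back
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
cuda.memcpy_htod(d_input, h_input)
context.execute_v2(bindings=[int(d_input), int(d_output)])
cuda.memcpy_dtoh(h_output, d_output)
print(h_output)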

Environment

TensorRT Version: 8.0.1.6
GPU Type: Nvidia Tegra X1 - on Jetson Nano
Operating System + Version: Ubuntu 18.04.5 LTS
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 2.6.2

Hi,

Could you please compare the ONNX Runtime output with the original TensorFlow model's output to make sure there is no issue with the ONNX model itself?
If the ONNX model is fine, we recommend trying the latest TensorRT version. If you still face the issue, please share the ONNX model that reproduces the problem and the script that compares the outputs so we can debug further.
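
Something along these lines works as a quick check (a minimal sketch; the paths, input shape, and tolerances are placeholders, and it assumes a Keras SavedModel):

import numpy as np
import onnxruntime as ort
import tensorflow as tf

# Feed the same random input to both models (replace with your input shape)
x = np.random.rand(1, 10).astype(np.float32)

# TensorFlow (Keras SavedModel) output
tf_model = tf.keras.models.load_model("my_tensorflow_model")
tf_out = tf_model.predict(x)

# ONNX Runtime output
sess = ort.InferenceSession("my_onnx_model.onnx")
input_name = sess.get_inputs()[0].name
ort_out = sess.run(None, {input_name: x})[0]

# If these do not match closely, the problem is in the TF->ONNX step, not in TensorRT
print("max abs diff:", np.max(np.abs(tf_out - ort_out)))
np.testing.assert_allclose(tf_out, ort_out, rtol=1e-3, atol=1e-4)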

Thank you.