TensorRT 8.6 GA: C++ inference gives different results compared to ONNX/PT model Python inference


Hi there

I converted a saved model to ONNX in order to run inference with the TensorRT 8.6 C++ API.
The model is the visual (image) model from OpenAI's CLIP.

I am checking the image embedding value produced by the model.

However, the output results from the TensorRT C++ API differ from Python inference (PyTorch or ONNX Runtime).

I checked and found that the Python inference results are correct, while the TensorRT C++ API results are not (both running in FP32).
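To put a number on how far the two outputs actually diverge, it can help to compare the embedding vectors directly. This is a minimal sketch using NumPy; `compare_embeddings` is a hypothetical helper, and the arrays stand in for the embeddings you would dump from the Python (PyTorch/ONNX Runtime) and TensorRT C++ runs:

```python
import numpy as np

def compare_embeddings(ref: np.ndarray, test: np.ndarray) -> dict:
    """Compare a reference embedding against a test embedding."""
    ref = ref.astype(np.float32).ravel()
    test = test.astype(np.float32).ravel()
    cos = float(np.dot(ref, test) /
                (np.linalg.norm(ref) * np.linalg.norm(test)))
    return {
        "cosine_similarity": cos,
        "max_abs_diff": float(np.max(np.abs(ref - test))),
        "mean_abs_diff": float(np.mean(np.abs(ref - test))),
    }

# Stand-in data: a reference embedding and a slightly perturbed copy.
ref = np.random.RandomState(0).randn(512).astype(np.float32)
print(compare_embeddings(ref, ref + 1e-6))
```

For two FP32 engines of the same model, cosine similarity should be extremely close to 1.0 and the max absolute difference in the 1e-5 to 1e-4 range; a much larger gap usually points at a preprocessing mismatch or a build problem rather than normal floating-point noise.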

Below is the trtexec verbose log produced after the ONNX check; please take a look.


TensorRT Version: 8.6.1 GA
GPU Type: RTX 3090
Nvidia Driver Version:
CUDA Version: 12.0
CUDNN Version:
Operating System + Version:
Python Version (if applicable): 3.10.1
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 2.0
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the below snippet:


import onnx

filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid
  2. Try running your model with the trtexec command.
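A typical trtexec invocation for this kind of debugging builds an FP32 engine with verbose logging. The sketch below just assembles the command from standard trtexec flags (`--onnx`, `--saveEngine`, `--verbose`); the file names are hypothetical placeholders for your own model and engine paths:

```python
import subprocess

def build_trtexec_cmd(onnx_path: str, engine_path: str) -> list:
    """Assemble a trtexec invocation that builds an FP32 engine with verbose logging."""
    return [
        "trtexec",
        f"--onnx={onnx_path}",        # input ONNX model
        f"--saveEngine={engine_path}",  # where to write the serialized engine
        "--verbose",                  # full build log, useful for debugging
    ]

cmd = build_trtexec_cmd("clip_visual.onnx", "clip_visual.engine")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run trtexec
```

Redirecting the resulting log to a file is what the "--verbose" log request below refers to.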

In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.

I shared the trtexec log and model via the URLs in my question. Please check again.

log: https://drive.google.com/file/d/1carAjQ_oP2xEkia48J0tNztQ5meukaqZ/view?usp=drive_link
ONNX model:


I am unable to access the ONNX model; could you please grant me permission?

Thank you.