I converted a saved model to ONNX in order to run inference with the TensorRT 8.6 C++ API.
The model is the visual encoder from OpenAI's CLIP model, and I am checking the image embedding values it produces.
However, the output results from the TensorRT C++ API differ from Python inference (PyTorch or ONNX Runtime).
I verified that the Python inference is correct and the TensorRT C++ API output is incorrect (both running in FP32).
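For reference, this is a minimal sketch of the C++ inference path I mean, not my full code. It assumes TensorRT 8.5+ name-based I/O (enqueueV3); the engine file name, the tensor names "input"/"output", and the shapes (1x3x224x224 in, 1x512 out, matching a CLIP ViT-B/32 visual encoder) are placeholders you would adjust to the actual model.

```cpp
// Minimal FP32 inference sketch with the TensorRT 8.6 C++ API.
// Engine file name, tensor names, and shapes are assumptions.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the serialized engine from disk.
    std::ifstream file("clip_visual.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    auto* runtime = nvinfer1::createInferRuntime(gLogger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());
    auto* context = engine->createExecutionContext();

    // Assumed shapes for a CLIP ViT-B/32 visual encoder.
    const size_t inCount  = 1 * 3 * 224 * 224;
    const size_t outCount = 1 * 512;
    std::vector<float> hostIn(inCount, 0.f), hostOut(outCount, 0.f);
    // hostIn must hold the *same* preprocessed pixels as the Python side.

    void *devIn = nullptr, *devOut = nullptr;
    cudaMalloc(&devIn,  inCount  * sizeof(float));
    cudaMalloc(&devOut, outCount * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    cudaMemcpyAsync(devIn, hostIn.data(), inCount * sizeof(float),
                    cudaMemcpyHostToDevice, stream);

    // Name-based tensor binding (TensorRT 8.5+); tensor names are
    // hypothetical and must match the ONNX graph's I/O names.
    context->setTensorAddress("input", devIn);
    context->setTensorAddress("output", devOut);
    context->enqueueV3(stream);

    cudaMemcpyAsync(hostOut.data(), devOut, outCount * sizeof(float),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);  // hostOut is invalid until this returns

    std::cout << "first embedding value: " << hostOut[0] << std::endl;

    cudaFree(devIn);
    cudaFree(devOut);
    cudaStreamDestroy(stream);
    delete context;
    delete engine;
    delete runtime;
    return 0;
}
```

Note that the sketch synchronizes the stream before touching the host output buffer; reading results before the device-to-host copy completes is one easy way to get outputs that disagree with Python, independent of the engine itself.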
Attached below is the verbose trtexec log from checking the ONNX model.
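For context, a trtexec invocation of roughly this form produces that check and log (file names are placeholders):

```sh
trtexec --onnx=clip_visual.onnx --verbose --saveEngine=clip_visual.engine
```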
TensorRT Version: 8.6.1 GA
GPU Type: RTX 3090
Nvidia Driver Version:
CUDA Version: 12.0
CUDNN Version:
Operating System + Version:
Python Version (if applicable): 3.10.1
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 2.0
Baremetal or Container (if container which image + tag):
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered