What NVIDIA GPUs can I use with the TensorRT execution provider in ONNX Runtime?

I asked the same question [here](https://ai.stackexchange.com/questions/27980/what-nvidia-gpus-are-supported-by-tensorrt-execution-provider-in-onnx-runtime) on AI Stack Exchange. I probably should have asked it here in the first place.

Hi,
Please share the ONNX model and the script, if you haven't already, so that we can assist you better.
In the meantime, you can try a few things:


1. Validate your model with the snippet below:

```python
# check_model.py
import onnx

filename = "your_model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
```
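
If the model is valid, `check_model` returns silently; otherwise it raises `onnx.checker.ValidationError` describing the problem.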
2. Try running your model with the `trtexec` command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
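
For example, a minimal invocation looks something like this (the model path is a placeholder):

```
trtexec --onnx=your_model.onnx --verbose
```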
If you are still facing the issue, please share the `trtexec --verbose` log for further debugging.
Thanks!

@NVES I do not have a model. I simply want to know which NVIDIA GPUs can be used with ONNX Runtime (ORT) in combination with the TensorRT EP.
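
For context, this is the kind of check I ultimately want to understand the hardware requirements for — a minimal sketch, assuming an ONNX Runtime build with TensorRT support and a placeholder model path:

```python
import onnxruntime as ort

# Providers this ONNX Runtime build was compiled with;
# 'TensorrtExecutionProvider' only shows up in TensorRT-enabled builds.
print(ort.get_available_providers())

# Ask for TensorRT first, falling back to CUDA and then CPU if the
# TensorRT EP cannot be used on this machine.
session = ort.InferenceSession(
    "your_model.onnx",  # placeholder path
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
print(session.get_providers())  # providers actually enabled for this session
```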

Hi @timsxela,

We recommend you go through the [support matrix doc](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html) for more details. It covers the supported platforms, features, and hardware capabilities of TensorRT.
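
If it helps, the support matrix describes hardware support in terms of CUDA compute capability. A quick way to check the compute capability of your GPU is a sketch like the one below, assuming pycuda is installed:

```python
import pycuda.driver as cuda

cuda.init()  # initialize the CUDA driver API
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print(f"{dev.name()}: compute capability {major}.{minor}")
```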

Thank you.

Hi @spolisetty, do the TensorRT EP for ONNX Runtime and the TensorRT inference engine both support exactly the same platforms, features, and hardware capabilities? In other words, is your link valid for both the TensorRT EP and the TensorRT inference engine?

Thank you for your help.

Hi @timsxela,

We recommend you post your question on the ORT discussion forum. You may get better help there in finding out whether there is any difference between the TensorRT support matrix and the platforms ORT supports.

Thank you.