Does TensorRT support cross-platform engines on the Ampere architecture?


I used TensorRT 8.4.1.5 on Ubuntu 18.04 to convert an ONNX model into a trt engine, and found that the engine also runs normally under Windows 10. The graphics card used on Ubuntu is a 3090, and the graphics card used on Windows is a 3090 Ti. I checked the official documentation, which says: "By default, TensorRT engines are only compatible with the type of device where they were built. With build-time configuration, engines can be built that are compatible with other types of devices. Currently, hardware compatibility is supported only for Ampere and later device architectures and is not supported on NVIDIA DRIVE OS or JetPack." Does this mean engines are cross-platform compatible on Ampere? Hope to get everyone's answer. Thanks.
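For reference, the hardware-compatibility behavior quoted above is an explicit build-time option; in TensorRT releases that support it (8.6 and later, not the 8.4 version used here), trtexec exposes it as a flag. A sketch of such an invocation, with placeholder file names:

```shell
# Build an engine that is portable across Ampere-and-later GPUs.
# File names are placeholders; --hardwareCompatibilityLevel requires
# TensorRT 8.6+ and is not available in 8.4.x.
trtexec --onnx=model.onnx \
        --hardwareCompatibilityLevel=ampere+ \
        --saveEngine=model_ampere_plus.trt
```

Without this flag, an engine built on one GPU is only guaranteed to work on GPUs of the same compute capability, which is why a 3090-built engine happens to load on a 3090 Ti (both are Ampere SM 8.6) but is not guaranteed portable in general.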


TensorRT Version: 8.4.1.5
GPU Type: 3090 and 3080ti
Nvidia Driver Version:
CUDA Version: 11.1
CUDNN Version: 8.2.4
Operating System + Version: Ubuntu 18.04 / Windows 10
Python Version (if applicable): 3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below:

import onnx

filename = "model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid
  2. Try running your model with the trtexec command.

In case you are still facing issues, please share the trtexec "--verbose" log for further debugging.
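A minimal trtexec invocation for step 2 might look like the following (file names are placeholders; the flags shown are standard trtexec options):

```shell
# Convert the ONNX model to a TensorRT engine and print detailed
# build/inference logs that can be shared for debugging.
trtexec --onnx=model.onnx \
        --saveEngine=model.trt \
        --verbose
```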

This is the ONNX model I used and the trt engine converted under Ubuntu. This trt engine runs normally under Windows.
test.onnx (27.9 MB)
test.trt (15.7 MB)

Please refer to the following section of the developer guide, which may answer your queries: