TensorRT engine not compatible with another device

I converted an ONNX model to a TensorRT 8.5.2 engine using trtexec.
The conversion was done on a server, and inference ran fine on that server's NVIDIA GPU.
However, when I deploy the same engine on a Jetson AGX Orin, it throws an error and cannot run.
Is it necessary to convert the model on the same device where inference will run?

Yes. A TensorRT engine is optimized for the specific GPU architecture and TensorRT version it was built with, so it is not portable across devices. For Jetson, you need to build the TensorRT engine on the device itself.
For more information, please refer to the following similar post:
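As a rough sketch, rebuilding the engine directly on the Jetson would look like the following (the file names `model.onnx` and `model.engine` are placeholders for your own files):

```shell
# Run on the Jetson AGX Orin itself so the engine matches its GPU architecture.
# --onnx:       input ONNX model to parse
# --saveEngine: path where the serialized TensorRT engine is written
# --fp16:       optional; enables FP16 precision, usually faster on Orin
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

The resulting `model.engine` can then be loaded for inference on that same device.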

Thank you.

