How to convert a model from PyTorch to TensorRT on Xavier (or a laptop/PC) and then run it on Jetson Nano 4GB?

Description

A clear and concise description of the bug or issue.

Environment

TensorRT Version: 7.1.x.x
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Hi all,
I want to run a torch2trt TRTModule model on a Jetson Nano, but I could not convert the PyTorch model to TensorRT directly on the Jetson Nano 4GB (not enough memory, even after extending the swap space).
I then tried converting the model on another device, such as a Xavier or a laptop/PC, and running the result on the Jetson Nano 4GB. The conversion worked, but the engine would not run, because of an engine version conflict (even though the Nano and the Xavier have the same TensorRT and JetPack versions).
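For reference, here is a minimal sketch of the torch2trt flow I am describing; `resnet18` and the input shape are illustrative stand-ins for my actual model:

```python
import torch
from torchvision.models import resnet18
from torch2trt import torch2trt, TRTModule

# Build the TensorRT engine from the PyTorch model.
# This is the step that runs out of memory on the Nano 4GB.
model = resnet18(pretrained=True).cuda().eval()
x = torch.ones(1, 3, 224, 224).cuda()
model_trt = torch2trt(model, [x], fp16_mode=True)

# The saved state dict embeds a serialized TRT engine,
# so this file is tied to the GPU it was built on.
torch.save(model_trt.state_dict(), 'model_trt.pth')

# Reload for inference (on the same GPU the engine was built on).
model_trt = TRTModule()
model_trt.load_state_dict(torch.load('model_trt.pth'))
y = model_trt(x)
```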

Thanks

Hi,

The generated engine files are not portable across platforms or TensorRT versions. TRT engine files are specific to the exact GPU model they were built on (in addition to the platform and the TensorRT version), so they must be re-targeted, i.e. rebuilt, for the specific GPU if you want to run them on a different GPU.
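One common way to work within this constraint (a sketch only, with illustrative file names, shapes, and a `resnet18` stand-in) is to treat the ONNX file as the portable artifact: export it once on the PC/Xavier, then build the engine on each target GPU.

```python
# Step 1 -- on the PC or Xavier: export the PyTorch model to ONNX.
# The ONNX file is portable; the TensorRT engine built from it is not.
import torch
from torchvision.models import resnet18

model = resnet18(pretrained=True).eval()
dummy = torch.ones(1, 3, 224, 224)
torch.onnx.export(model, dummy, 'model.onnx', opset_version=11)
```

```python
# Step 2 -- on the Jetson Nano: build the engine for the Nano's own GPU
# (TensorRT 7.x Python API).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('model.onnx', 'rb') as f:
    if not parser.parse(f.read()):
        raise RuntimeError('failed to parse model.onnx')

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # cap build-time scratch memory

engine = builder.build_engine(network, config)
with open('model.trt', 'wb') as f:
    f.write(engine.serialize())
```

The same build step can also be done on the Nano with the bundled `trtexec` tool (`trtexec --onnx=model.onnx --saveEngine=model.trt`).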

Thank you.


How do I re-target the engine to a specific GPU? Do you mean converting the model on the Jetson Nano? I could not do that because there is not enough memory :(.

Yes, the engine needs to be generated on the same platform. We are moving this post to the Jetson Nano forum so you can get better help.

Thank you.

Hi,

As mentioned above, you will need to convert the model on the Nano, since TensorRT engines are not portable; a low-memory conversion is sketched below.
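For example, a low-memory conversion on the Nano might look like the following sketch; the `resnet18` stand-in and the `max_workspace_size` value are assumptions to be tuned for your model:

```python
import torch
from torchvision.models import resnet18
from torch2trt import torch2trt

model = resnet18(pretrained=True).cuda().eval()
x = torch.ones(1, 3, 224, 224).cuda()

# fp16 mode plus a small workspace cap lowers the peak memory
# that TensorRT needs while building the engine on the Nano 4GB.
model_trt = torch2trt(
    model, [x],
    fp16_mode=True,
    max_workspace_size=1 << 25,  # 32 MB; tune to what the Nano can spare
)
torch.save(model_trt.state_dict(), 'model_trt.pth')
```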
May I know which framework you use? Do you use pure TensorRT or TRTorch?

Thanks.
