Description
A clear and concise description of the bug or issue.
Environment
TensorRT Version: 7.1.x.x
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Hi all,
I want to run a TRTModule model (from torch2trt) on a Jetson Nano. At first I could not convert the PyTorch model to TensorRT directly on the Jetson Nano 4GB: it ran out of memory, even after I extended the swap space.
I then tried converting the model on another device (such as a Xavier or a laptop/PC) and running the resulting engine on the Jetson Nano 4GB. That did not work either, apparently because of an engine version/compatibility conflict, even though the Nano and the Xavier have the same TensorRT and JetPack versions.
Thanks