
Description

We have a segmentation model pre-trained with PyTorch. To make it also work with libtorch for deployment in production, we used torch.jit to convert the model.
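
For context, a minimal sketch of the kind of torch.jit conversion we mean; the model layers, input shape, and file names below are placeholders, not the actual values:

import torch
import torch.nn as nn

# Stand-in for the actual pre-trained segmentation model (placeholder layers).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 2, 1),   # 2 output classes, purely as an example
)
model.eval().cuda()

# Trace with a dummy input of the expected shape (torch.jit.script is the alternative).
example = torch.randn(1, 3, 512, 512, device="cuda")
with torch.no_grad():
    scripted = torch.jit.trace(model, example)

# The saved archive is what torch::jit::load() reads on the libtorch side.
scripted.save("seg_model_ts.pt")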

After converting, we compared performance and found that the transfer-learning speed roughly halved with the converted model running under libtorch on Jetson Xavier.
So the question is: is this expected on Jetson Xavier? And is there any recommended practice for converting a model from PyTorch for use with libtorch on Jetson Xavier?
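
One thing worth ruling out before concluding the converted model is slower: TorchScript modules typically spend the first few forward passes profiling and fusing, so timings that include those passes, or that omit CUDA synchronization, can easily look close to 2x slower. Below is a hedged timing sketch (iteration counts are arbitrary); the same warm-up and synchronization considerations apply when timing training steps on the libtorch side.

import time
import torch

def benchmark(module, example, iters=50, warmup=10):
    # Warm-up passes let the TorchScript profiler/fuser settle before timing.
    module.eval()
    with torch.no_grad():
        for _ in range(warmup):
            module(example)
        torch.cuda.synchronize()     # finish all pending GPU work before timing
        start = time.time()
        for _ in range(iters):
            module(example)
        torch.cuda.synchronize()     # wait for the timed work to complete
    return (time.time() - start) / iters

# Example usage: compare eager vs. scripted on the same input.
# eager_ms = benchmark(model, example) * 1000
# scripted_ms = benchmark(scripted, example) * 1000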

Environment

TensorRT Version: 7.1.3.0
GPU Type: Xavier NX
Nvidia Driver Version: Jetpack 4.4.1
CUDA Version: 10.2.89
CUDNN Version: 8.0.0.180
Operating System + Version: Ubuntu 18.04
PyTorch Version (if applicable): 1.9
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we recommend raising it on the respective platform via the link below.

Thanks!