TensorRT conversion of a model for Jetson Nano on a host device

Description

Is TensorRT conversion of a model for the Jetson Nano possible on a host device, given an identical environment or any kind of simulated environment?

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,

The generated engine files are not portable across platforms or TensorRT versions. TRT engine files are specific to the exact GPU model they were built on (in addition to the platform and the TensorRT version), so an engine must be rebuilt on the specific GPU you want to run it on.
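For reference, here is a minimal sketch of building an engine from an ONNX file with the TensorRT Python API, run on the target device itself. File names are placeholders, and the exact builder API varies between TensorRT versions (newer releases replace `max_workspace_size` and `build_engine` with `set_memory_pool_limit` and `build_serialized_network`):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Parse a (portable) ONNX model into a TensorRT network definition.
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:  # placeholder file name
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("Failed to parse the ONNX file")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # 256 MiB; keep this modest on the Nano
config.set_flag(trt.BuilderFlag.FP16)  # the Nano's Maxwell GPU supports FP16

# This build step is what ties the engine to this exact GPU,
# platform, and TensorRT version.
engine = builder.build_engine(network, config)
with open("model.engine", "wb") as f:
    f.write(engine.serialize())
```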

Thank you.

Hi @spolisetty, thanks for the reply. We want to generate a model specifically for the Jetson Nano, so it doesn't need to be portable. But is it possible to generate it through simulated Nano hardware, or by any other technique, rather than generating it directly on the Nano?

Sorry, the engine has to be generated on the Jetson Nano itself.
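What you can do on the host is produce the portable intermediate model: for example, export to ONNX on your host machine and copy that file to the Nano, so only the final engine build runs on the device. A hypothetical host-side export with PyTorch (the model and shapes below are just examples):

```python
import torch
import torchvision

# On the host: export a portable ONNX file. Unlike the .engine file,
# ONNX is not tied to a particular GPU or TensorRT version.
model = torchvision.models.resnet18(pretrained=True).eval()  # example model
dummy = torch.randn(1, 3, 224, 224)  # example input shape

torch.onnx.export(
    model, dummy, "model.onnx",  # placeholder output path
    opset_version=11,
    input_names=["input"], output_names=["output"],
)
# Then copy model.onnx to the Nano and build the engine there
# (e.g., with trtexec or the Python API shown above).
```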

Hi NVIDIA,
Can you please verify whether TensorRT model conversion (creation of the .engine file) MUST take place on the target “hardware platform”, even with the latest releases of JetPack/TensorRT? If so, are there any plans to support model conversion in a virtual environment on a different “hardware platform”, or will that never be possible?
Thanks.