Hi,
Is it possible to convert a PyTorch model to TensorRT on the host machine and run/use it on the Jetson Nano?
I tried to do it directly on the Jetson Nano, but the process gets killed due to low memory. So is there a way to cross-compile the code/models on the host machine, or is there another solution, such as using the Jetson containers on the host machine to generate the TensorRT engine?
Please note that I already tried running a TensorRT engine converted on the host PC (without cross-compiling) on the Jetson, but got a compute capability mismatch error.
Hi @Gemm, a TensorRT engine needs to be built on the same type of GPU as the one you will run it on. For example, you can copy serialized TensorRT engines between Nano devices, since they have the same GPU. However, you can’t copy a TensorRT engine between a Nano and a PC, or between a Nano and a Xavier, because they use different GPUs.
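Since the engine has to be built on the Nano itself, the usual workaround for the out-of-memory kill is to add swap and then build the engine on-device with `trtexec` (shipped with TensorRT under `/usr/src/tensorrt/bin` on JetPack). A rough sketch, assuming your model has already been exported to ONNX as `model.onnx` (the file names and swap size here are just placeholders):

```shell
# Add 4 GB of swap so the build process isn't killed by the OOM killer
# (a one-time setup; add it to /etc/fstab to make it persistent)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Build and serialize the engine directly on the Nano
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine \
    --fp16
```

Closing other applications (or booting to a console without the desktop) also frees memory during the build. The resulting `model.engine` can then be copied to other Nanos running the same TensorRT version.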