Compiling application on Orin card


I am trying to build a TensorRT project in Eclipse with the Nsight plug-in.
The target is a remote Orin board.
The problem is that the compilation is done on the local machine,
and I don’t have the ARM TensorRT libraries on this machine.

Could you point me to an explanation on how to do this?


TensorRT Version: 8.4
GPU Type: Orin, Ampere
Nvidia Driver Version:
CUDA Version: 10.4
CUDNN Version: 8.3
Operating System + Version: Linux
Python Version (if applicable): NA
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

  1. On Linux Ubuntu 18.04 x64
  2. Install Eclipse + the Nsight plug-in
  3. Create a remote SSH connection to an Orin board
  4. In Eclipse, create a hello-world project and import the TRT include files & libraries
  5. Set the project architecture to ARM64


  Result: the project doesn’t compile.
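The steps above can also be attempted outside Eclipse to isolate the problem; a minimal sketch, assuming the `g++-aarch64-linux-gnu` cross toolchain on the host (the package name is an assumption; adjust to your distribution):

```shell
# Check for the aarch64 cross compiler and report its target triple;
# without it, the host build always produces x86_64 objects that
# cannot link against the Orin's ARM TensorRT libraries.
if command -v aarch64-linux-gnu-g++ >/dev/null 2>&1; then
    aarch64-linux-gnu-g++ -dumpmachine
else
    echo "aarch64-linux-gnu-g++ not found: install g++-aarch64-linux-gnu"
fi
```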


Sorry for the delayed response. Could you please let us know which Jetson platform you are using?
Ex: Jetson Orin Nano or Jetson AGX Orin?

Thank you.

Jetson AGX Orin

We are moving this post to the Jetson AGX Orin forum to get better help.

Dear @coyosi,
Do you see any compilation errors? Could you please share the logs? Also, could you try cross-compiling a TRT sample directly, to make sure there is no issue with the environment? See the Sample Support Guide :: NVIDIA Deep Learning TensorRT Documentation for guidance on TensorRT sample cross compilation.
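A minimal sketch of that sample cross-compile, assuming the samples live in the usual `/usr/src/tensorrt/samples` location and the aarch64 cross toolchain is installed (the path and the `TARGET=aarch64` make variable are assumptions; check the Sample Support Guide for the exact flow on your JetPack version):

```shell
# Cross-compile the TensorRT samples for aarch64 on the x86_64 host.
set -e
TRT_SAMPLES=/usr/src/tensorrt/samples            # assumed install location
if [ -d "$TRT_SAMPLES" ] && command -v aarch64-linux-gnu-g++ >/dev/null 2>&1; then
    cd "$TRT_SAMPLES"
    make TARGET=aarch64                          # build for the Orin target
else
    echo "cross-compile prerequisites missing (toolchain or samples dir)"
fi
```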

Please close this issue.
I will try to connect to the board a different way.

Thank you,

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.