After installing a TensorRT container on Jetson, compiling/running a C++ program fails. How can I solve this?

Hardware: Jetson Orin NX; Software: Ubuntu 22.04, JetPack 6.

I downloaded the Docker image nvcr.io/nvidia/tensorrt:24.08-py3-igpu and created a container. Compiling the program in the container reports an error:


I also downloaded the Docker image nvcr.io/nvidia/tensorrt:24.08-py3 and created a container. Running the program in that container (parameters: -s /root/yolov8n.trt10.wts /root/yolov8n.trt10.engine n) reports an error:

My program is downloaded from https://github.com/wang-xinyu/tensorrtx, and I have uploaded it here:
yolov8_trt10.zip (83.2 KB)

  1. Which image should I download for Jetson: the one with the igpu tag, or the one without?
  2. Is the file libnvdla_compiler.so available for Ubuntu 22.04?
  3. How can I solve the runtime error?
    Any suggestions are welcome. Thank you for your help.

Moving to Orin NX forum.

Hi,

1. The container with the iGPU tag is the one for Jetson.

2. You can find it at the link below:
https://repo.download.nvidia.com/jetson

3. How do you launch the container?
Please run it with --runtime=nvidia so that the Jetson driver libraries are mounted into the container.
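As a minimal sketch of such a launch (the image tag follows the one mentioned in this thread; the library path assumes the standard aarch64 JetPack layout):

```shell
# Run the container with the NVIDIA runtime so the host's L4T driver
# libraries (libnvdla_compiler.so among them) get mounted inside.
docker run --runtime nvidia -it --rm nvcr.io/nvidia/tensorrt:24.08-py3-igpu

# Inside the container, check that the DLA compiler dependency resolves:
ldd /usr/lib/aarch64-linux-gnu/libnvinfer.so | grep nvdla
```

If the grep line still shows "not found", the host library is missing or the runtime did not mount it.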

Thanks.

Installing nvdla_compiler failed. Which package should I download and install from the site (Index)? My Jetson is an Orin NX flashed with JetPack 6. Could you please guide me through the installation?

I ran the script below successfully, but I do not know whether it is correct or even needed.

# Download and install the L4T core package for JetPack 6 (L4T r36.1.0)
wget https://repo.download.nvidia.com/jetson/t234/pool/main/n/nvidia-l4t-core/nvidia-l4t-core_36.1.0-20231206095146_arm64.deb
apt-get install ./nvidia-l4t-core_36.1.0-20231206095146_arm64.deb
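As a quick sanity check (these commands are not from the original post and assume a Debian-based JetPack rootfs), you can inspect what that package actually put on the system:

```shell
# Show the installed version of the L4T core package:
dpkg -s nvidia-l4t-core | grep -i version
# List a few of the files it installed:
dpkg -L nvidia-l4t-core | head -n 20
```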

Jetson Orin NX, flashed with JetPack 6.
After downloading the Docker image and creating a container, the file libnvdla_compiler.so cannot be found:

docker pull nvcr.io/nvidia/tensorrt:24.08-py3-igpu
docker run --runtime nvidia -it --rm nvcr.io/nvidia/tensorrt:24.08-py3-igpu
ldd /usr/lib/aarch64-linux-gnu/libnvinfer.so
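To narrow that report down to just the broken dependencies, a small hypothetical helper (my own naming, not part of the thread) can filter the ldd output:

```shell
#!/bin/sh
# list_missing: print any "not found" dependencies of a shared library.
# Exits nonzero when at least one dependency is unresolved.
list_missing() {
    ! ldd "$1" | grep "not found"
}

# Example: a binary whose dependencies all resolve produces no output.
list_missing /bin/sh
```

Running it against libnvinfer.so inside the container would list only the unresolved libraries, such as libnvdla_compiler.so.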

Can anyone suggest the cause of the problem, or provide a correct script?

Hi,

Could you try l4t-tensorrt:r8.6.2-devel?

Thanks.

Yes, I have tried it; as far as I remember it works fine, and version 24.05-py3-igpu also seems to be fine. Their TensorRT versions are all 8.x.

But our ultimate goal is to use the latest version of TensorRT. We have tested both 10.3 and 8.6.1.6 on Windows, and version 10.3 gives roughly a 4%-5% speed improvement.

We are currently using TensorRT 8.6.1.6 on the Jetson Orin NX and want to upgrade to version 10.3 for the speed improvement. We do not know how to use a newer TensorRT container; can you help us find a way?

Hi,

Do you need a container solution?

We have verified that the TensorRT package can be installed on Jetson directly.
Please find the steps below (you can change to a newer version in a similar manner):
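The steps attachment is not reproduced above; as a rough sketch, installation on JetPack 6 typically goes through NVIDIA's apt repository. The repository lines, the r36.3 release tag, and the tensorrt meta-package name below are my assumptions about the JetPack 6 layout, so please verify them against NVIDIA's official installation guide:

```shell
# Assumed JetPack 6 apt setup; verify the URLs and release tag
# against NVIDIA's documentation before using.
echo "deb https://repo.download.nvidia.com/jetson/common r36.3 main" | \
    sudo tee /etc/apt/sources.list.d/nvidia-l4t.list
echo "deb https://repo.download.nvidia.com/jetson/t234 r36.3 main" | \
    sudo tee -a /etc/apt/sources.list.d/nvidia-l4t.list
sudo apt-get update

# TensorRT meta-package (pulls in libnvinfer and related libraries):
sudo apt-get install tensorrt
```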

Thanks.

Thanks for the reply, but after reflashing, installing CUDA 12.6.1, and installing TensorRT 10.3 on Ubuntu 22.04, it failed again (libnvdla_compiler.so => not found). Do you have this file? Could you send this file and its dependencies to us directly?

Flash:
Jetson Flash .txt (1.3 KB)


An Ubuntu 20.04 virtual machine on Windows 10 is used for flashing. Could that be a problem?

Install CUDA and TensorRT:
Cuda12.6.1TensorRT10.3.0InstallScriptsForJetson.txt (1.4 KB)

Our board is like this:


Does this board support running TensorRT 10.3.0?

Hi,

You can find nvidia-l4t-dla-compiler_36.3.1-20240516220919_arm64.deb at the link below:

https://repo.download.nvidia.com/jetson#Jetpack%206.0
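Hypothetically, the install could look like the following. The pool path in the URL is my guess based on the layout of the nvidia-l4t-core package earlier in this thread, so browse the repository to confirm the exact location first:

```shell
# Download and install the DLA compiler package (URL path is assumed):
wget https://repo.download.nvidia.com/jetson/t234/pool/main/n/nvidia-l4t-dla-compiler/nvidia-l4t-dla-compiler_36.3.1-20240516220919_arm64.deb
sudo apt-get install ./nvidia-l4t-dla-compiler_36.3.1-20240516220919_arm64.deb

# Verify the library is now visible to the dynamic linker:
ldconfig -p | grep nvdla_compiler
```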

Thanks.

After the installation, we do have this file.

The installation file we use is: nvidia-l4t-dla-compiler_36.3.0-20240719161631_arm64.deb.

The one you provided, nvidia-l4t-dla-compiler_36.3.1-20240516220919_arm64.deb, is untested, but it is probably fine as well.

Hi,

Just want to double-confirm.
Does TensorRT work in your environment after installing the nvidia-l4t-dla-compiler package?

Thanks.