Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.8.1
[*] DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other
Target Operating System
[*] Linux
QNX
other
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
[*] DRIVE AGX Orin Developer Kit (not sure of its part number)
other
SDK Manager Version
1.9.3.10904
[*] other
Host Machine Version
[*] native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other
I am trying to build a Docker container on the NVIDIA DRIVE AGX Orin using a multi-stage build method, where the first stage uses
nvcr.io/nvidia/l4t-jetpack:<tag_version> as the base image. In the second stage I use arm64v8/ros:humble as the base image and copy over all the necessary CUDA/cuDNN/TensorRT libraries (recreating their symlinks) so that they are available in that stage, in order to build a specific Autoware environment and run it on the DRIVE AGX Orin.
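To illustrate, the Dockerfile is structured roughly like this. This is only a simplified sketch, not my actual file: the tag, library paths and version numbers below are placeholders for an r35.x image.

# Stage 1: JetPack image that provides CUDA / cuDNN / TensorRT
FROM nvcr.io/nvidia/l4t-jetpack:r35.1.0 AS jetpack

# Stage 2: ROS 2 Humble base for the Autoware environment
FROM arm64v8/ros:humble
# Copy the needed libraries from stage 1 (exact paths depend on the JetPack tag)
COPY --from=jetpack /usr/local/cuda-11.4 /usr/local/cuda-11.4
COPY --from=jetpack /usr/lib/aarch64-linux-gnu/libcudnn* /usr/lib/aarch64-linux-gnu/
COPY --from=jetpack /usr/lib/aarch64-linux-gnu/libnvinfer* /usr/lib/aarch64-linux-gnu/
RUN ln -s /usr/local/cuda-11.4 /usr/local/cuda && ldconfig   # recreate the symlinks
ENV PATH=/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/lib/aarch64-linux-gnu:${LD_LIBRARY_PATH}
# ... install ROS/Autoware dependencies and build the Autoware environment ...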
Currently I am facing some issues during the build process:
1. Initially I tested all the existing tags of the l4t-jetpack image:
- nvcr.io/nvidia/l4t-jetpack:r36.2.0 → cuda 12.2
- nvcr.io/nvidia/l4t-jetpack:r35.4.1 → cuda 11.4
- nvcr.io/nvidia/l4t-jetpack:r35.3.1 → cuda 11.4
- nvcr.io/nvidia/l4t-jetpack:r35.2.1 → cuda 11.4
- nvcr.io/nvidia/l4t-jetpack:r35.1.0 → cuda 11.4
For an initial test of stage 1 alone, I ran a simple CUDA program inside each container: r36.2.0 was not compatible, r35.4.1 led to some JIT compiler errors, but in all the other cases (r35.3.1, r35.2.1, r35.1.0) I was able to access the GPU from inside the container. The cuDNN tests also ran fine. However, I am facing issues with TensorRT, which makes the whole build of the Autoware environment fail because some packages need TensorRT.
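For reference, these are the kinds of sanity checks I ran inside each container. This is a rough sketch; the paths and library sonames assume the default JetPack layout and TensorRT 8.x, so they may differ on other tags.

nvcc --version                                   # CUDA toolkit version reported inside the container
dpkg -l | grep -E 'cudnn|nvinfer|tensorrt'       # installed cuDNN / TensorRT packages
ldconfig -p | grep libnvinfer                    # is the TensorRT runtime visible to the loader?
ldd /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 | grep "not found"   # unresolved deps such as libnvdla_compiler.so
/usr/src/tensorrt/bin/trtexec --help             # fails right away if the DLA / NvMedia libs are missing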
To test these JetPack image versions standalone, I ran the containers like this:
docker run -it --gpus all --runtime nvidia nvcr.io/nvidia/l4t-jetpack:<tag_version> /bin/bash
Inside each container I then ran
find / -name libnvdla_compiler.so
but the library was nowhere to be found in r35.3.1, r35.2.1, or r35.1.0.
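On the host I also checked the following, assuming the NVIDIA container runtime on DRIVE OS uses the same CSV-based mount lists as on Jetson (that assumption may well be wrong, please correct me):

# Does the library exist on the host at all?
find /usr/lib -name 'libnvdla_compiler.so*' 2>/dev/null
# Which host files is the nvidia runtime configured to mount into containers?
ls /etc/nvidia-container-runtime/host-files-for-container.d/
grep -r libnvdla /etc/nvidia-container-runtime/host-files-for-container.d/ 2>/dev/null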
I tried to bind-mount only the specific libraries that were missing into the container, like this:
docker run -it -v /usr/lib/libnvdla_compiler.so:/usr/lib/libnvdla_compiler.so --gpus all --runtime nvidia nvcr.io/nvidia/l4t-jetpack:r35.1.0 /bin/bash
I read that these low-level libraries are normally flashed onto the target via SDK Manager. Soon I encountered more issues with further missing libraries, such as libnvmedia.so, libnvmedia_tensor.so, libnvmedia_dla.so, etc., so the run command ends up looking roughly like the sketch below.
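This is only a sketch; the host-side paths are assumptions based on where the libraries appear to live on my target and may not match an actual DRIVE OS install.

docker run -it --gpus all --runtime nvidia \
  -v /usr/lib/libnvdla_compiler.so:/usr/lib/libnvdla_compiler.so \
  -v /usr/lib/libnvmedia.so:/usr/lib/libnvmedia.so \
  -v /usr/lib/libnvmedia_tensor.so:/usr/lib/libnvmedia_tensor.so \
  -v /usr/lib/libnvmedia_dla.so:/usr/lib/libnvmedia_dla.so \
  nvcr.io/nvidia/l4t-jetpack:r35.1.0 /bin/bash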
Why are these libraries from the host system not accessible inside the container in the first place?
2. I require assistance in updating CUDA from version 11.4 to 12.2 on my system, to ensure compatibility with the latest version of Autoware. Could you please provide guidance on the upgrade process?
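For context, this is how I am currently checking the installed CUDA version on the system (sketch; version.json is only present on newer toolkit installs):

nvcc --version                      # reports release 11.4 on my system
cat /usr/local/cuda/version.json    # toolkit version metadata, if present
dpkg -l | grep cuda-toolkit         # installed CUDA toolkit packages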