Please provide the following info (tick the boxes after creating this topic):

Software Version
- DRIVE OS 6.0.6
- DRIVE OS 6.0.5
- DRIVE OS 6.0.4 (rev. 1)
- DRIVE OS 6.0.4 SDK
- other

Target Operating System
- Linux
- QNX
- other

Hardware Platform
- DRIVE AGX Orin Developer Kit (940-63710-0010-300)
- DRIVE AGX Orin Developer Kit (940-63710-0010-200)
- DRIVE AGX Orin Developer Kit (940-63710-0010-100)
- DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
- DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
- DRIVE AGX Orin Developer Kit (not sure its number)
- other

SDK Manager Version
- 1.9.3.10904
- other

Host Machine Version
- native Ubuntu Linux 20.04 Host installed with SDK Manager
- native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
- native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
- other
Hello
After looking at this thread:
I do not see any of the cuDNN and TensorRT headers in /usr/include/aarch64-linux-gnu on the target. They are installed on the host after I installed the SDK via sdkmanager, but I don't even see CUDA-X AI listed as a target component. Are cuDNN and TensorRT supposed to be installed on the target by sdkmanager?
Dear SivaRamaKrishnaNV
So is it normal that CUDA-X AI is not listed as one of the target components in sdkmanager? The libs (.so) are installed, but no headers are installed on the target. I ran a find and it did not turn up anything. Do we need to copy the headers from the host system? That is fine if we need to do that, but I never had to do it before, and I went through the release notes and documentation without seeing anything about it.
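For anyone checking their own target, something like this is enough to see whether the dev files are there (the paths are just my assumption about where they would land):

```bash
# Quick sanity check on the target for cuDNN/TensorRT dev files (paths assumed)
ls /usr/include/aarch64-linux-gnu/ | grep -iE 'cudnn|nvinfer'   # headers
ls /usr/lib/aarch64-linux-gnu/     | grep -iE 'cudnn|nvinfer'   # libraries
```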
Dear SivaRamaKrishnaNV
It is not just the samples; we build OpenCV, Torch, etc. with CUDA/cuDNN support, since it is sometimes difficult to find pre-built packages for the compatible CUDA versions. We do this on Jetson Orin all the time. I suppose I can just install those wheels and debs to avoid cross compiling all these different packages. So basically we can no longer compile any TensorRT applications directly on this target, and it all has to be cross compiled on a host or in a Docker container?
Will it be possible to install cudnn-local-repo-ubuntu2004-8.6.0.163_1.0-1_arm64.deb on the DRIVE Orin to get a full cuDNN development setup? I am still not able to find a compatible TensorRT installer, though. I suppose I could use the debs from JetPack that I install on Jetson Orin. Will that work?
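If that deb route is viable, I assume the usual NVIDIA local-repo flow would apply; the /var path and package names below are inferred from the deb name, so adjust as needed:

```bash
# Hedged sketch: standard local-repo install flow for the cuDNN deb on the target
sudo dpkg -i cudnn-local-repo-ubuntu2004-8.6.0.163_1.0-1_arm64.deb
sudo cp /var/cudnn-local-repo-ubuntu2004-8.6.0.163/cudnn-local-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get install libcudnn8 libcudnn8-dev
```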
Dear SivaRamaKrishnaNV
Thank you for the reply. I managed to get PyTorch and OpenCV compiled directly on the target after copying the appropriate headers/libs from the host.
Python and TensorRT were a different story. There was no tensorrt.so in the dist-packages folder for Python 3.8, and I cannot copy the host lib as it is x86; at least, I could not find an aarch64 lib. But we were lucky to have a Jetson Orin that has everything installed on the target via JetPack, so I copied those libs and it worked.
At least we can now, in a pinch, just build and run on the target after some setup post-flashing.
Dear @servanti,
Thank you for sharing your experience.
But you may encounter a few issues if you use libs from the Jetson release.
Could you share the steps you used to compile PyTorch/OpenCV on the target, to help others in the community?
I could not find an aarch64 tensorrt.so for Python, so I just figured I'd try the one from my Jetson Orin. So far it works; if there is an "official" way to get that lib, I'd be happy to try it.
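For anyone trying the same thing, a minimal sketch of that copy, assuming JetPack's Python 3.8 layout (the dist-packages path may differ on your boards):

```bash
# On the Jetson Orin (JetPack): pack up the TensorRT Python bindings
tar czf trt-python.tgz -C /usr/lib/python3.8/dist-packages tensorrt

# On the DRIVE Orin target: unpack into the same dist-packages location and test
sudo tar xzf trt-python.tgz -C /usr/lib/python3.8/dist-packages
python3 -c "import tensorrt; print(tensorrt.__version__)"
```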
Now, as far as compiling cuDNN and TensorRT code on the target, these are the steps I followed.
On the host, tar up all libcudnn* and libnvinfer* from the target lib/aarch64-linux-gnu directory, the matching entries from /etc/alternatives, and all cudnn* and nvinfer* headers from include/aarch64-linux-gnu. These are the libs and headers. The idea is to untar these on the target into /usr/include/aarch64-linux-gnu, /usr/lib/aarch64-linux-gnu, and /etc/alternatives. Do not use cp or scp, since tar preserves all the symbolic links.
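As a sketch, assuming sdkmanager put the cross aarch64 files under /usr/lib/aarch64-linux-gnu and /usr/include/aarch64-linux-gnu on the host (adjust the globs to match what your install actually has):

```bash
# On the host: tar keeps the .so symlink chains intact (plain cp/scp would not)
tar czf cudnn-trt.tgz \
    /usr/lib/aarch64-linux-gnu/libcudnn* \
    /usr/lib/aarch64-linux-gnu/libnvinfer* \
    /etc/alternatives/*cudnn* /etc/alternatives/*nvinfer* \
    /usr/include/aarch64-linux-gnu/cudnn* \
    /usr/include/aarch64-linux-gnu/NvInfer*

# Copy cudnn-trt.tgz to the target, then extract at / so everything lands back
# in the same locations (tar stores the paths without the leading slash):
sudo tar xzf cudnn-trt.tgz -C /
```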
Once this is done, the target has all the headers and libs needed to compile cuDNN and TensorRT apps.
Test OpenCV in Python:
python3 -c "import cv2; print('OpenCV version:', str(cv2.__version__)); print(cv2.getBuildInformation())"
PyTorch:
Here is a script that builds the latest PyTorch, but it can be changed to download whatever version you need.
Also, I think the CMake version that comes "out of the box" on the target (3.16) is not sufficient for the PyTorch compile, so I install CMake 3.27 from source: Installing | CMake
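For the record, a source build of CMake is just the usual bootstrap flow; the exact version and URL below are only an example:

```bash
# Build a newer CMake from source on the target (version here is just an example)
wget https://github.com/Kitware/CMake/releases/download/v3.27.9/cmake-3.27.9.tar.gz
tar xzf cmake-3.27.9.tar.gz && cd cmake-3.27.9
./bootstrap --parallel=$(nproc)
make -j$(nproc) && sudo make install
```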
The option TORCH_CXX_FLAGS=-D_GLIBCXX_USE_CXX11_ABI=1 is something we need in order to compile and link against ROS (long story); you can omit it.
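As a rough sketch of what a minimal native build looks like (this is not the exact script; the arch list, ABI flag, and job count are assumptions to tune for your board):

```bash
# Minimal sketch of a native PyTorch build on the target
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
pip3 install -r requirements.txt

export USE_CUDA=1 USE_CUDNN=1 USE_NCCL=0 USE_DISTRIBUTED=0
export TORCH_CUDA_ARCH_LIST="8.7"              # Orin GPU compute capability (assumption)
export CXXFLAGS="-D_GLIBCXX_USE_CXX11_ABI=1"   # the ABI define mentioned above; omit if not needed
export MAX_JOBS=8                              # keep memory usage manageable

python3 setup.py bdist_wheel
pip3 install dist/torch-*.whl
```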