I am developing on a custom Jetson AGX Xavier. I am not using the Dev Kit, but the MIC-730AI (MIC-730AI - AI Inference System based on NVIDIA® Jetson AGX Xavier™ - Advantech).
My own program was initially tested on Dev Kit, and I am trying to replicate it on MIC. My Dev Kit is installed with Jetpack 4.6, which has CUDA 10.2, CUDNN 8.2.1, TensorRT 8.0.1.
I was not able to flash the MIC with the NVIDIA SDK Manager, because the board support package for the Dev Kit is not the same as that for the MIC-730AI. I had to flash using a script supplied by the vendor, called "Jetpack 4.6 V1 disk flash". The issue, however, is that it comes with no CUDA, cuDNN, or TensorRT.
I am trying to install CUDA 10.2 first, but I can't find an aarch64 version here: CUDA Toolkit 10.2 Download | NVIDIA Developer
How do I proceed for CUDA 10.2? And from there, how would I install cuDNN 8.2.1 and TensorRT 8.0.1, which are the other packages in Jetpack 4.6?
Also, I have found a series of files here:
Some of them look correct, like those under "Jetpack 4.6" and "common". Would some of the files there help me? Also, what do t210, t194, and t186 mean?
You can install the libraries by apt directly.
First, please check the L4T version via the below command:
$ cat /etc/nv_tegra_release
Then follow the instructions shared below to install the libraries.
Thank you for the reply. After running
$ cat /etc/nv_tegra_release
I get the reply
# R32 (release), REVISION: 6.1, GCID: 27863751, BOARD: t186ref, EABI: aarch64, DATE: Mon Jul 26 19:36:31 UTC 2021
I recognize the t186, so I guess I am on L4T R32.6.1?
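For reference, that release line can be parsed mechanically into the usual "r32.6.1" form. A minimal sketch, using the sample string from the output above (on the board itself you would read `/etc/nv_tegra_release` instead):

```shell
# Parse an /etc/nv_tegra_release line into the "r32.6.1" form.
# Sample line copied from the output above.
line='# R32 (release), REVISION: 6.1, GCID: 27863751, BOARD: t186ref, EABI: aarch64, DATE: Mon Jul 26 19:36:31 UTC 2021'
release=$(echo "$line" | sed -n 's/^# R\([0-9]*\) (release), REVISION: \([0-9.]*\),.*/r\1.\2/p')
echo "$release"   # → r32.6.1
```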
I have further questions:
- I don't see the relevance of looking up the L4T version. How does it affect my further steps?
- Your link led me to a page about OTA updates that is not CUDA/cuDNN/TensorRT specific. It shows a list of components that can be installed with apt, but for the host, not the edge device. I am trying to install on the edge device directly. Can you advise me on which installation process I am supposed to refer to? Apologies, I am completely new to this.
- Can you give a high-level overview of how you would install CUDA 10.2, cuDNN 8.2.1, and TensorRT 8.0.1? Is it just apt installing a list of packages in sequence? Also, I see no reference to TensorRT in the link you shared.
- If I wanted to avoid installing all this, would a container from NVIDIA L4T TensorRT | NVIDIA NGC work on the AGX Xavier?
1. Yes, the L4T version of your environment is r32.6.1.
2. The OTA feature can install user-space libraries on both the host and the Jetson.
Please give it a try.
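As a concrete sketch of that apt route (the package name is assumed from the standard r32.6 Jetson repository; verify with `apt-cache policy nvidia-jetpack` on your board before installing):

```shell
# Hedged sketch: on a flashed r32.6.1 board, the NVIDIA apt source should
# already exist at /etc/apt/sources.list.d/nvidia-l4t-apt-source.list.
sudo apt update
# The meta-package pulls in the matching JetPack 4.6 components
# (CUDA 10.2, cuDNN 8.2.1, TensorRT 8.0.1) for this L4T release:
sudo apt install nvidia-jetpack
```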
3. The libraries depend on the OS version.
To get higher library versions, you will need a newer L4T branch as well.
4. Containers can work on Jetson.
But for r32 users, the libraries are mounted into the container directly from the Jetson host.
So you will need to install the libraries natively as well.
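For the container route, a minimal sketch (the tag `r8.0.1-runtime` is an assumption; check the available tags on the NGC page for l4t-tensorrt):

```shell
# Hedged sketch: run the NGC l4t-tensorrt container with the nvidia runtime.
# On r32.x, the CUDA/cuDNN/TensorRT libraries are mounted in from the host,
# so they must already be installed natively on the Jetson.
sudo docker run -it --rm --runtime nvidia \
    nvcr.io/nvidia/l4t-tensorrt:r8.0.1-runtime
```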