We currently use JetPack 4.6.2 on our Jetson Xavier NX (16 GB eMMC and 8 GB RAM), which works fine. We would like to switch to the 16 GB RAM version. According to this [I have stucked When I use jetson-inference on xavier nx 16GB], TensorRT support on that module forces us to move to JetPack 5.0.1.
However, the SDK Manager is unable to install JetPack 5.0.1 on the Jetson Xavier NX with 16 GB eMMC because of lack of space. We have also tried doing this manually, i.e. using `sudo apt install nvidia-jetpack`, with the same problem. We have additionally tried removing the GUI with `sudo apt-get purge $(cat nvubuntu-focal-packages_only-in-desktop)`, as suggested here [Guide to Minimizing Jetson Disk Usage], but we are still running out of space.
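For reference, this is roughly the sequence we ran to reclaim space (the package-list filename comes from the minimization guide and may differ per L4T release):

```bash
# Check current usage on the eMMC root partition
df -h /

# Remove the desktop/GUI packages listed in the minimization guide
# (nvubuntu-focal-packages_only-in-desktop is the list referenced by the
#  guide; adjust the filename for your L4T release)
sudo apt-get purge $(cat nvubuntu-focal-packages_only-in-desktop)

# Clean up orphaned dependencies and the apt package cache
sudo apt-get autoremove -y
sudo apt-get clean

# Verify how much space was freed
df -h /
```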
So we would like to know if there is something we are doing wrong here. Is there a way to get TensorRT to work on the 16 GB RAM Xavier NX, or a way to fully install JetPack 5.0.1 on the Xavier NX?
Yes, but we still need this to fit on a production module that only has 16 GB of eMMC. Are you suggesting that we create the system on the NVMe drive, reduce its size by following this [Guide to Minimizing Jetson Disk Usage], and then reflash it onto a 16 GB eMMC production device?
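If that is the intended workflow, I assume the final flash onto the production module would look something like this (a sketch using the stock L4T flashing tools; `jetson-xavier-nx-devkit-emmc` is my assumption for the eMMC module's board config):

```bash
# From the Linux_for_Tegra directory of the JetPack 5.0.1 BSP,
# with the module connected in recovery mode:
cd Linux_for_Tegra

# Flash the (minimized) root filesystem to the eMMC APP partition
sudo ./flash.sh jetson-xavier-nx-devkit-emmc mmcblk0p1
```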
So, to confirm: the new SDK provided by NVIDIA's JetPack 5.0.1 is not meant to work with the eMMC-based production version of NVIDIA's Jetson Xavier NX? Am I missing something here? Tasks such as deep learning inference depend on the SDK, do they not?
Thanks for the reply. Are you referring to using the SDK Manager? If so, I have yet to find a configuration that was installable (size-wise) on the 16 GB eMMC version. What I have been able to do is create the original image on a 128 GB SD card and then reduce its size to less than 16 GB using this [Guide to Minimizing Jetson Disk Usage]. This seems like the only viable path I can think of. However, I am not sure how to convert the SD-card-based image into one that is installable on the production version.
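My best guess so far is to clone the APP partition from the SD-card system and flash the clone onto the eMMC module. A sketch of what I believe that looks like with the stock L4T tools (the board-config and partition names are my assumptions for the Xavier NX):

```bash
cd Linux_for_Tegra

# 1. With the SD-card-booted device in recovery mode, clone its APP
#    (rootfs) partition to the host. This produces backup.img (sparse)
#    and backup.img.raw (raw).
sudo ./flash.sh -r -k APP -G backup.img jetson-xavier-nx-devkit mmcblk0p1

# 2. Use the raw clone as the system image for the eMMC flash
sudo cp backup.img.raw bootloader/system.img

# 3. With the eMMC production module in recovery mode, flash the
#    pre-built image (-r skips rebuilding system.img)
sudo ./flash.sh -r jetson-xavier-nx-devkit-emmc mmcblk0p1
```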
At the very least, is there documentation on how to implement an inference module on a JetPack 5.0.1-based 16 GB eMMC NX card? My initial question still remains: is there a way to install only the necessary components (i.e. TensorRT, etc.) required for inference, using the SDK Manager?
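Alternatively, if this can be done over apt instead of the SDK Manager, I imagine something like the following would work (a sketch; the per-component package names are assumptions and should be confirmed against the actual dependency list first):

```bash
# List the component meta-packages that the full nvidia-jetpack
# meta-package pulls in
apt-cache depends nvidia-jetpack

# Install only the pieces needed for inference instead of the full stack
# (package names are assumptions; confirm them against the output above)
sudo apt update
sudo apt install nvidia-cuda nvidia-cudnn8 nvidia-tensorrt
```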