I’m working with an NVIDIA Jetson Xavier NX for object recognition and, to speed up the process, I’d like to use the GPU, but I’m running into problems. I flashed and configured the board from an Ubuntu 20.04 host PC using SDK Manager, installing JetPack 5.1 along with DeepStream 6.2. (With JetPack 5.1.3, the board would no longer boot after a sudo apt upgrade, which is why I stayed on 5.1.) For the configuration I followed the same steps as in this tutorial: https://www.youtube.com/watch?v=Ucg5Zqm9ZMk
Afterwards, I installed the CUDA 12.1 toolkit, as indicated below, and then PyTorch.
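The environment update I applied afterwards follows the usual CUDA post-install pattern. This is a sketch rather than my exact lines; it assumes the toolkit landed in the default /usr/local/cuda-12.1 prefix:

```
# Appended to ~/.bashrc so the shell picks up the new toolkit
# (assumes the default install prefix /usr/local/cuda-12.1)
export PATH=/usr/local/cuda-12.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64:$LD_LIBRARY_PATH
```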
However, even though I updated the export path, running the nvcc --version command still reports that the installed CUDA version is 11.4, and calling torch.cuda.is_available() returns False.
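Concretely, these are the checks I’m running (the which call is just an extra diagnostic to confirm which binary the shell resolves; torch.version.cuda reports the CUDA version the PyTorch wheel was built against, or None for a CPU-only build):

```
# Confirm which nvcc binary is first on the PATH
which nvcc

# Still reports CUDA 11.4 instead of 12.1
nvcc --version

# GPU check from PyTorch: on my setup the last line prints False
python3 -c "import torch; print(torch.__version__, torch.version.cuda); print(torch.cuda.is_available())"
```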
I’d like to know what I need to do to actually use the GPU during testing.