CUDA not available as ggc_user on Jetson Nano Device

Hi everyone,

I am having difficulty getting CUDA to work correctly from my ggc_user account on my Jetson Nano device.

I have established that the following setup works from my root user account “ewan”:

JetPack: 6.1
Python Version: 3.10.12
PyTorch Version: 2.5.0
PyTorch CUDA Available: True
PyTorch CUDA Version: 12.6
Current CUDA Device: Orin
System CUDA Version: 12.6, V12.6.68
cuDNN: 9.6
Torchvision Version: 0.20.0
System Platform: Linux-5.15.148-tegra-aarch64-with-glibc2.35
CPU Architecture: aarch64

However, when I log in as ggc_user it correctly reports the Python and PyTorch versions but fails at “CUDA Available”, i.e. torch.cuda.is_available() returns False.

I have already added ggc_user to the video group.
I have double-checked the environment variables, including $PATH and $LD_LIBRARY_PATH.
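
As an extra sanity check, here is a small Python snippet for confirming both points (a minimal sketch; it assumes the user name is literally ggc_user and uses the device paths from the ls output further down):

import grp
import os

# Is ggc_user listed as a supplementary member of the video group?
print('ggc_user in video group:', 'ggc_user' in grp.getgrnam('video').gr_mem)

# Can the current user actually read/write the Tegra GPU device nodes?
for dev in ('/dev/nvidia0', '/dev/nvidiactl', '/dev/nvidia-modeset'):
    print(dev, 'accessible:', os.path.exists(dev) and os.access(dev, os.R_OK | os.W_OK))

Note that grp.getgrnam('video').gr_mem only lists supplementary members, so a user whose primary group is video would not show up there.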

When I run this script as both root and as ggc_user, I get the following output.

import os
import torch

# Show the environment the Python process actually sees
print('PATH:', os.environ.get('PATH'))
print('LD_LIBRARY_PATH:', os.environ.get('LD_LIBRARY_PATH'))
print('CUDA_HOME:', os.environ.get('CUDA_HOME'))

# Show the PyTorch build and whether it can see the GPU
print('Torch version:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
print('cuDNN enabled:', torch.backends.cudnn.is_available())

# Try to query the first CUDA device and report the failure if it is not usable
try:
    print(torch.cuda.get_device_properties(0))
except Exception as e:
    print('CUDA Device Error:', e)

Root:

PATH: /usr/local/cuda-12.6/bin:/usr/local/cuda-12.4/bin:/usr/local/cuda-12.4/bin:/usr/local/cuda-12.4/bin:/usr/local/cuda-12.4/bin:/home/ewan/.local/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
LD_LIBRARY_PATH: /usr/local/cuda-12.6/lib64:/usr/local/cuda-12.4/lib64:/usr/local/cuda-12.4/lib64:/usr/local/cuda-12.4/lib64:/usr/local/cuda-12.4/lib64:/usr/local/cuda/lib64:
CUDA_HOME: /usr/local/cuda
Torch version: 2.5.0
CUDA available: True
cuDNN enabled: True
_CudaDeviceProperties(name='Orin', major=8, minor=7, total_memory=7619MB, multi_processor_count=8, uuid=a881b265-40e5-5fd2-92dd-71b0d1fc443e, L2_cache_size=2MB)

ggc_user:


Error in cpuinfo: prctl(PR_SVE_GET_VL) failed
PATH: /usr/local/cuda-12.6/bin:/usr/local/cuda-12.6/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
LD_LIBRARY_PATH: /usr/local/cuda-12.6/lib64:/usr/local/cuda-12.6/lib64:/usr/local/cuda/lib64:
CUDA_HOME: /usr/local/cuda
Torch version: 2.5.0
CUDA available: False
cuDNN enabled: False
CUDA Device Error: Torch not compiled with CUDA enabled

The device nodes themselves appear to be world readable/writable:

$ ls -l /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Jan 13 12:46 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Jan 13 12:46 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Jan 13 12:47 /dev/nvidia-modeset

I have read through this thread while trying to understand the issue, but I am still confused: CUDA not accessible for ggc_user

Is there something wrong with the paths or am I missing permissions?

Any help would be greatly appreciated!

Hi,

The error you are seeing is:

CUDA Device Error: Torch not compiled with CUDA enabled

Based on this, please check whether you are using the same PyTorch package under the root account and under the ggc_user account.

$ python3
Python 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch 
>>> print(torch.__file__)
/home/nvidia/.local/lib/python3.10/site-packages/torch/__init__.py
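
If the path differs between the two accounts, ggc_user is most likely importing a different (possibly CPU-only) PyTorch wheel. As a rough additional check (a minimal sketch), torch.version.cuda can be compared under both accounts:

>>> import torch
>>> print(torch.version.cuda)   # None for a CPU-only build, e.g. '12.6' for a CUDA 12.6 build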

Thanks.
