Well, now I feel silly. This system uses the Slurm job scheduler, so in order to request the GPU in an interactive job (i.e., via srun), one needs to pass (at least) --gres=gpu:1.
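For reference, a minimal interactive request might look like the sketch below. Only --gres=gpu:1 comes from the discussion above; the partition name, time limit, and use of --pty are placeholders and will vary by cluster configuration.

```bash
# Request an interactive shell with one GPU allocated by Slurm.
# --gres=gpu:1 is the essential flag; --partition and --time are
# site-specific placeholders, adjust them for your cluster.
srun --gres=gpu:1 --partition=gpu --time=00:30:00 --pty bash

# Once the job starts, the allocated GPU should be visible:
nvidia-smi
```

Without the --gres request, Slurm does not expose any GPU to the job, which is why nvidia-smi reports no devices inside a plain interactive session.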