Problem: cudaGetDeviceCount returns error 802

On systems with a single GTX 1050 Ti (or GTX 1650), the latest CUDA release (11.6) and its bundled driver update (510.39.01) generate “cudaGetDeviceCount returned 802” even though nvidia-smi works. What does this error mean?
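(For reference, runtime error code 802 is cudaErrorSystemNotReady, and its message string is exactly the “system not yet initialized” text that deviceQuery prints below. A minimal sketch of the same check deviceQuery performs, assuming a CUDA 11.x toolkit; the file name check_devices.cu is just illustrative:)

// check_devices.cu -- reproduce the first step deviceQuery performs.
// Build and run: nvcc check_devices.cu -o check_devices && ./check_devices
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // On the failing machine this prints:
        //   cudaGetDeviceCount returned 802
        //   -> system not yet initialized
        std::printf("cudaGetDeviceCount returned %d\n-> %s\n",
                    static_cast<int>(err), cudaGetErrorString(err));
        return 1;
    }
    std::printf("Detected %d CUDA Capable device(s)\n", count);
    return 0;
}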

This is a problem that appears to have started with CUDA 11.3. Systems that also include a newer device are not affected (see the multi-GPU example further down). For example, on the affected single-GPU system:

[root@node2056 ~]# nvidia-smi -L
GPU 0: NVIDIA GeForce GTX 1050 Ti (UUID: GPU-8b0954cb-9072-7574-a1fe-7aaa8210b97b)

[root@node2056 deviceQuery]# ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 802
-> system not yet initialized
Result = FAIL

[root@node2056 ~]# nvidia-smi
Tue Jan 25 15:54:00 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.39.01    Driver Version: 510.39.01    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:07:00.0 Off |                  N/A |
| 30%   25C    P0    N/A /  75W |      0MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

On another system that has multiple GPU devices there is no problem, e.g.,

[root@ldas-pcdev13 ~]# nvidia-smi -L
GPU 0: NVIDIA GeForce GTX 1050 Ti (UUID: GPU-c417268e-53d8-77d0-dc80-5323ce279565)
GPU 1: Tesla V100-PCIE-16GB (UUID: GPU-3fbb2a42-ab69-aabf-c395-3f5c943dc939)
GPU 2: NVIDIA GeForce GTX 1060 6GB (UUID: GPU-f72438e5-c483-ff4e-15a2-6648f98aabd7)
GPU 3: NVIDIA GeForce RTX 2080 Ti (UUID: GPU-066b6317-7b5b-028a-ed11-7ee416adb71f)

[root@ldas-pcdev13 ~]# nvidia-smi
Tue Jan 25 16:10:29 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.39.01    Driver Version: 510.39.01    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:02:00.0 Off |                  N/A |
| 30%   40C    P0    N/A /  75W |    483MiB /  4096MiB |     37%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-PCIE...  Off  | 00000000:03:00.0 Off |                    0 |
| N/A   33C    P0    53W / 250W |    823MiB / 16384MiB |     21%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  Off  | 00000000:81:00.0 Off |                  N/A |
|  0%   47C    P2    70W / 180W |    501MiB /  6144MiB |     47%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce ...  Off  | 00000000:82:00.0 Off |                  N/A |
| 41%   50C    P2    71W / 260W |    716MiB / 11264MiB |      6%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A    347955      C   ...nv-teobresums/bin/python3      481MiB |
|    1   N/A  N/A    346338      C   ...nv-teobresums/bin/python3      819MiB |
|    2   N/A  N/A    346361      C   ...nv-teobresums/bin/python3      499MiB |
|    3   N/A  N/A    347954      C   ...nv-teobresums/bin/python3      713MiB |
+-----------------------------------------------------------------------------+

[root@ldas-pcdev13 deviceQuery]# env CUDA_VISIBLE_DEVICES=3 ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce GTX 1050 Ti"
CUDA Driver Version / Runtime Version 11.6 / 11.6
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 4040 MBytes (4235984896 bytes)
(006) Multiprocessors, (128) CUDA Cores/MP: 768 CUDA Cores
GPU Max Clock rate: 1468 MHz (1.47 GHz)
Memory Clock rate: 3504 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 1048576 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 98304 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 2 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.6, CUDA Runtime Version = 11.6, NumDevs = 1
Result = PASS
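(A side note on the numbering above: with CUDA_VISIBLE_DEVICES=3 the runtime reports the GTX 1050 Ti, not the RTX 2080 Ti that nvidia-smi lists as GPU 3. That is expected: by default the CUDA runtime enumerates devices fastest-first, i.e. CUDA_DEVICE_ORDER=FASTEST_FIRST, while nvidia-smi uses PCI bus order; setting CUDA_DEVICE_ORDER=PCI_BUS_ID makes the two orderings match. A small sketch for comparing the orderings; the file name list_devices.cu is hypothetical:)

// list_devices.cu -- print the name and PCI location of every device the
// CUDA runtime can see, so its ordering can be compared with nvidia-smi.
// Build: nvcc list_devices.cu -o list_devices
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount returned %d\n-> %s\n",
                    static_cast<int>(err), cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess) continue;
        // pciBusID/pciDeviceID correspond to the Bus-Id column in nvidia-smi.
        std::printf("CUDA device %d: %s (PCI %04x:%02x:%02x.0)\n",
                    i, prop.name, prop.pciDomainID, prop.pciBusID,
                    prop.pciDeviceID);
    }
    return 0;
}

(Running it as CUDA_DEVICE_ORDER=PCI_BUS_ID ./list_devices should reproduce the nvidia-smi ordering.)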

Please enable nvidia-persistenced to start on boot, make sure it is continuously running, and check whether that resolves the issue.
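On a systemd-based distribution this is typically done with (a sketch; the exact unit name can vary with how the driver was packaged):

systemctl enable --now nvidia-persistenced
systemctl status nvidia-persistenced
nvidia-smi -q | grep "Persistence Mode"

The last command should report Enabled once the daemon has put the GPU into persistence mode.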

Starting nvidia-persistenced did not help (even with a reboot):

[root@node2056 deviceQuery]# uptime
 11:32:03 up 10 min,  1 user,  load average: 0.71, 0.60, 0.46

[root@node2056 deviceQuery]# ps -ef | grep persistenced
root        2531       1  0 11:22 ?        00:00:00 /usr/bin/nvidia-persistenced
root        6151    3239  0 11:32 pts/0    00:00:00 grep --color=auto persistenced

[root@node2056 deviceQuery]# ./deviceQuery 
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 802
-> system not yet initialized
Result = FAIL

[root@node2056 deviceQuery]# nvidia-smi -L
GPU 0: NVIDIA GeForce GTX 1050 Ti (UUID: GPU-8b0954cb-9072-7574-a1fe-7aaa8210b97b)

Good morning,
Thanks for your suggestion about enabling nvidia-persistenced to start upon reboot. Unfortunately, this did not resolve the problem. I’d appreciate other suggestions!
Thanks,
Sharon

This is a driver error. The company does not want to admit that it has this bug and does not want to release a patch for a broken Linux driver.


Did you find a solution to this?

Is this still an issue in driver 565? It feels like it should be a fairly easy fix.