Here’s the setup. This is a new system that I’ve been configuring for the first time today.
CentOS 5.5 64-bit, 4 x C2050s, SELinux disabled, NVIDIA Developer Drivers version 3.2 (64 bit), Cuda Toolkit 3.2.9 (64 bit), SDK 3.2
The problem I’m having is that following a reboot, normal users cannot run any CUDA programs. Attempts to run deviceQuery print the following:
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount FAILED. CUDA Driver and Runtime version may be mismatched.
Press <Enter> to Quit…
I have also noticed that in /dev, the expected nvidia0–3 and nvidiactl entries are not present. However, if I log in as root and run deviceQuery, it works: the /dev/nv* entries get created, and I can then run CUDA programs as a normal user. Alternatively, I can log in as root and run the script posted here: http://forums.nvidia.com/index.php?s=&…st&p=272085 . This creates the /dev/nv* entries and likewise permits me to run CUDA programs as a normal user.
The trouble is that every time I reboot, I have to either run that script or execute some CUDA program as root. I don’t understand why this is necessary. I could add the script to my startup routine, but that feels like a messy workaround. I must have done something wrong during the installation; otherwise everyone would be having this problem. Can anyone advise what I should check to determine why the /dev/nv* entries aren’t getting created when an unprivileged user tries to invoke a CUDA program following a reboot?
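For reference, here is a minimal sketch of the kind of startup script I mean, modeled on the sample in NVIDIA’s Linux driver README: it loads the nvidia kernel module and then creates the character device nodes (major number 195, minors 0 through N−1 for the GPUs plus 255 for nvidiactl). The N_GPUS=4 value is my assumption for this box with four C2050s; adjust to taste.

```shell
#!/bin/sh
# Sketch of a boot-time script to create NVIDIA device nodes.
# Assumption: 4 GPUs (matching the four C2050s in this system).
N_GPUS=4

make_nvidia_nodes() {
    # Load the NVIDIA kernel module; bail out if that fails.
    /sbin/modprobe nvidia || return 1

    # GPU device nodes: character devices, major 195, minors 0..N-1.
    for i in $(seq 0 $((N_GPUS - 1))); do
        mknod -m 666 "/dev/nvidia$i" c 195 "$i"
    done

    # Control node: major 195, minor 255.
    mknod -m 666 /dev/nvidiactl c 195 255
}

# Node creation needs root; skip quietly otherwise.
if [ "$(id -u)" -eq 0 ]; then
    make_nvidia_nodes || echo "nvidia module not available" >&2
fi
```

Dropping something like this into /etc/rc.local would work, but as I said, it feels like I’m papering over a misconfiguration rather than fixing it.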
PS. Thanks to jfvillal for posting a followup to his question yesterday. That helped me find the script referenced above.