I have a GTX280 that I use only for scientific calculations; for running my display I use an old NVIDIA graphics card. To use the GTX280 I need to run a startup script, after which the GTX280 works fine. But as soon as I run a program on the GTX280, the graphics card’s fans turn off (I can tell by the big difference in sound). Is this strange? Is it supposed to happen? Will I overheat my GTX280 this way?
I use Linux Fedora 8, and the script (which I got from this forum) starts like this:
# Startup/shutdown script for nVidia CUDA
# chkconfig: 345 80 20
# description: Startup/shutdown script for nVidia CUDA
I see now that the fan also turns off when I use the GTX280 as my primary graphics card. During bootup, all fans on the card run at full blast, and then soon the internal fan toward the monitor-connector end turns off. The large fan at the other end of the card stays on. Perhaps this is normal operation, I don’t know. I just installed the latest drivers, 177.67, and I am attaching a bug report.
A related question: I see that the current drivers do not work for my FX5200 (the one I had been using as a monitor driver), and the current toolkit and SDK do not work with the old driver that the FX5200 needs. So now I must use my GTX280 as my main graphics card, which means I am limited to 5-second kernel runs. My question is, will future versions of the toolkit get around the 5-second problem? I do not want to go out and buy a new card just to run the monitor. Also, if I did buy a new card, and it was also PCI-E, how do I tell my machine to use the new card as the monitor driver and the GTX280 for CUDA programs? I use Linux Fedora 8. With the FX5200 it was easy, as the BIOS has a switch for PCI vs. PCI-E.
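(In case it helps anyone with the same setup: one common way on Linux to pin the display to a specific PCI-E card is a Device section in /etc/X11/xorg.conf with an explicit BusID. The BusID value below is hypothetical; find the real ones with `lspci | grep VGA`.)

```
# Sketch of /etc/X11/xorg.conf -- BusID is hypothetical, check lspci output
Section "Device"
    Identifier "DisplayCard"      # the card that drives the monitor
    Driver     "nvidia"
    BusID      "PCI:1:0:0"        # hypothetical slot of the display card
EndSection
```

With the display bound to that card, the GTX280 is left free for CUDA and is not subject to the watchdog.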
With respect to the fan, that is expected behavior. The fan is not shutting off; it clocks down once the driver is in use, and will speed up as needed for dynamic cooling.
NV3x (GeForce 5200 is NV34) GPUs are no longer supported in the CUDA release drivers. There are legacy driver branches which will continue to support NV3x GPUs, however they will not have support for newer GPUs (such as the GTX280).
The 5 second watchdog cannot be worked around.
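(A common mitigation while the display runs on the same GPU, not a workaround for the watchdog itself, is to split a long computation into many short kernel launches, each well under the limit. A minimal sketch; `step()` and the slice scheme are hypothetical placeholders for your own kernel:)

```c
// Sketch: break a long-running computation into many short launches
// so no single kernel approaches the display watchdog limit.
__global__ void step(float *data, int slice);   // hypothetical per-slice kernel

void run_all(float *d_data, int n_slices)
{
    for (int s = 0; s < n_slices; ++s) {
        step<<<128, 256>>>(d_data, s);   // each launch does one small slice
        cudaThreadSynchronize();         // wait between launches (CUDA 2.x API)
    }
}
```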
The CUDA SDK samples provide examples of how to code for multiple GPUs.
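(For the device-selection part, a sketch along these lines, not taken from the SDK itself, enumerates the GPUs with the CUDA runtime and binds the process to a chosen one; the device index 1 is hypothetical, so check the printed list on your machine:)

```c
#include <stdio.h>
#include <cuda_runtime.h>

// List the CUDA-capable devices, then bind this process to one of them.
int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s\n", i, prop.name);
    }
    cudaSetDevice(1);   // hypothetical index: pick the GTX 280 for CUDA work
    return 0;
}
```

With one host thread (or process) per GPU, each calling cudaSetDevice with its own index, both cards can run CUDA work at once.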