I wonder how IRQs will be distributed in case you have more than one card, or a chipset with built-in graphics as well as a dedicated card. Would someone with such a setup care to post the result of
cat /proc/interrupts | grep nvidia
(This would be related to realtime priority control)
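For reference, a matching line in /proc/interrupts typically looks something like the following (the IRQ number and counters here are made up for illustration): the columns are the IRQ number, one interrupt count per CPU, the interrupt controller type, and the device name.

 16:    1234567          0   IO-APIC-fasteoi   nvidia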
I have two multi-card setups, but no on-board GPUs. The first system is a Core i7 on an ASRock Supercomputer X58 motherboard with a GTX 295 (a dual-GPU card) and a prototype GT200 card (approximately a GTX 260), so three GPUs in total:
The second system is an AMD Phenom on a Gigabyte MA790FX-DS5 motherboard with two 8800 GTX cards installed, but there is no nvidia line anywhere, which is odd:
The i7 looks interesting. It has three instances of the driver invoked (one for each GPU, I suppose). This is the scenario I was hoping for, especially since one of them is already on its own IRQ, which could be left as it is and used for display, while the other two could have their priorities raised above visuals, disk I/O and whatnot. So far the theory holds (though it may still be wrong).
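For what it's worth, here is a rough sketch of how the priority-raising step could look, assuming a kernel that runs IRQ handlers in threads (booted with the threadirqs parameter, or a PREEMPT_RT kernel); the IRQ number and PID below are hypothetical:

# find the IRQ line(s) the nvidia driver owns
grep nvidia /proc/interrupts
# each threaded IRQ is serviced by a kernel thread named irq/<nr>-<name>
ps -eo pid,cls,rtprio,comm | grep 'irq/'
# raise one handler (PID 1234 is hypothetical) to SCHED_FIFO priority 80,
# above ordinary disk I/O threads; run as root
chrt -f -p 80 1234
# verify the new scheduling class and priority
chrt -p 1234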
The AMD board is mystifying. Is this a remote machine sitting at runlevel 3, with the driver not initialized yet?
Ah, good call. That system had been left in runlevel 3 after a driver upgrade. Interestingly, a client program of some kind must be running for nvidia to appear in the interrupt list: running bandwidthTest and then immediately looking at /proc/interrupts still shows nothing. However, if I go to runlevel 5, then I see this:
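A quick way to reproduce this, as a sketch: the X server stays connected to the cards, so once it is up the entries should remain visible.

# bring up X; it holds the devices open as a client
telinit 5
# the nvidia line(s) should now appear
grep nvidia /proc/interrupts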
The NVIDIA driver automagically unloads most of the modules it uses when no user-space client program (either the X11 server or something else like nvidia-smi or a CUDA app) is connected to the card. That is why the hardware “disappears” when nothing is running on the card.
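Incidentally, if you want the driver to stay initialized even with no client attached, persistence mode should do it, assuming a driver recent enough to support that option in nvidia-smi:

# run as root: tell the driver to keep the GPUs initialized even
# when no user-space client is connected
nvidia-smi -pm 1
# the nvidia interrupt lines should now stick around
grep nvidia /proc/interrupts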