IRQ on multiple cards

I wonder how IRQs get distributed when you have more than one card, or a chipset with built-in graphics as well as a dedicated card. Would someone with such a setup care to post the result of

cat /proc/interrupts | grep nvidia

(This is related to real-time priority control.)
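For reference, the per-IRQ CPU affinity can also be inspected under /proc/irq. A minimal sketch using standard procfs paths (the IRQ numbers are simply whatever grep finds on your machine):

# For each IRQ the nvidia driver is attached to, print the CPU affinity mask.
for irq in $(grep nvidia /proc/interrupts | cut -d: -f1); do
    echo "IRQ $irq affinity mask: $(cat /proc/irq/$irq/smp_affinity)"
done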

I have two multi-card setups, but no on-board GPUs. The first system is a Core i7 on an ASRock Supercomputer X58 motherboard with a GTX 295 and a prototype GT200 card (approximately a GTX 260), so three GPU devices in total:

[stan@open ~]$ cat /proc/interrupts | grep nvidia

177:		 32	 521942		  0		  0		  0		  0		  0		  0   IO-APIC-level  ehci_hcd:usb1, uhci_hcd:usb8, nvidia

185:		 60	1044162		  0		  0		  0		  0		  0		  0   IO-APIC-level  uhci_hcd:usb3, ahci, nvidia, nvidia

The second system is an AMD Phenom on a Gigabyte MA790FX-DS5 motherboard with two 8800 GTX cards installed, but oddly there is no nvidia line anywhere:

[stan@grad08 ~]$ cat /proc/interrupts

		   CPU0	   CPU1	   CPU2	   CPU3	   

  0: 3456455034		  0		  0		  0	IO-APIC-edge  timer

  1:		  2		  0		  0		  0	IO-APIC-edge  i8042

  7:		  0		  1		  1		  1	IO-APIC-edge  parport0

  8:		  1		  0		  0		  0	IO-APIC-edge  rtc

  9:		  3		  0		  0		  0   IO-APIC-level  acpi

 50:		272		  0		  0		  0   IO-APIC-level  ohci_hcd:usb2, ohci_hcd:usb4, ahci

 58:	   5388		277   16620397	  87667   IO-APIC-level  ahci

 66:		 88		  0		  0  214039831		 PCI-MSI  eth0

217:		316		  0		  0		  0   IO-APIC-level  ohci_hcd:usb1, HDA Intel

225:		523   34904632		  0		  0   IO-APIC-level  ohci_hcd:usb3, ohci_hcd:usb5, ahci

233:		 37	   1994	 259600	2462867   IO-APIC-level  ehci_hcd:usb6

NMI:	1308249	1902052	1073844	 159191 

LOC: 3456306870 3456306817 3456306741 3456306638 

ERR:		  0

MIS:		  0

Thank you, seibert!

The i7 looks interesting: it shows three instances of the driver registered (one per GPU, I suppose). This is the scenario I was hoping for, especially since one of them is already on its own IRQ, which could be left as it is and used for the display, while the other two could have their priorities raised above graphics, disk I/O and whatnot (see the sketch below). So far the theory holds, though it may still turn out to be wrong.
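A minimal sketch of that priority juggling, assuming a kernel with threaded interrupt handlers (e.g. a PREEMPT_RT build), where each IRQ is serviced by a kernel thread named irq/<number>-<name>. IRQ 185 is just the number from the i7 output above, and the priority value is arbitrary:

ps -eo pid,rtprio,comm | grep 'irq/185'   # locate the IRQ kernel thread and its current rtprio
chrt -f -p 80 $(pgrep 'irq/185')          # raise it to SCHED_FIFO priority 80 (needs root)

On a stock kernel without threaded IRQs these kernel threads simply don't exist, so this only applies to RT-patched or similarly configured setups.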

The AMD board is mystifying. Is that a remote machine sitting at runlevel 3, with the driver not yet initialized?

Ah, good call. That system had been left in runlevel 3 after a driver upgrade. Interestingly, a client program of some kind apparently has to be running at that moment for nvidia to appear in the interrupt list: running bandwidthTest and then immediately looking at /proc/interrupts still shows nothing. However, if I go to runlevel 5, I see this:

[root@grad08 ~]# cat /proc/interrupts | grep nvidia

225:		523   35608817		  0		  0   IO-APIC-level  ohci_hcd:usb3, ohci_hcd:usb5, ahci, nvidia

233:		 37	   1994	 259625	2462901   IO-APIC-level  ehci_hcd:usb6, nvidia

The NVIDIA driver automagically releases most of the resources it sets up (including its interrupt handlers) when no user-space client program (either the X11 server or something else like nvidia-smi or a CUDA app) is connected to the card. That is why the hardware “disappears” from /proc/interrupts when nothing is running on the card.
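As an aside (exact flags vary with driver and nvidia-smi version, so treat this as a sketch): if you want a headless runlevel-3 box to stay initialized, the usual trick is to keep some client attached, for example:

nvidia-smi -pm 1     # enable persistence mode, on nvidia-smi versions that support it (needs root)
nvidia-smi -l 60 &   # alternatively, keep a polling client attached, sampling every 60 seconds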