Max number of GPUs?

Does anyone know if there is a hard limit on GPU enumeration in Windows XP Pro x64? That is, what is the absolute and/or theoretical maximum number of GPUs the operating system will support, assuming the motherboard hardware exists?

I am thinking of assembling a computer with six GTX 295 cards for a total of 12 GPUs. Will Windows XP Pro x64 or Windows Vista support that many GPUs?

I am aware of the many issues associated with building such a machine, right now I just need to know if Windows will enumerate that many…


Odds of it working are approximately zero because of system BIOS issues. Eight GPUs are officially supported with R180 drivers in Linux, but I don’t know what the max is for Windows.

I have 8 GPUs working in Windows but I’m concerned that it’s the maximum number.

Out of sick curiosity, how do you plan to get 6 cards in one computer?

This plus these and a custom-machined riser card holder. Ideally the whole shebang would be water-cooled and overclocked to nearly 22 TeraFLOPS in one (big) box.

Would get in the neighborhood of 108,000 ppd in Folding@Home from the one machine.
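That ppd figure is just per-card throughput times card count. A back-of-envelope sketch, where the ~18,000 ppd per GTX 295 is an assumed number chosen to match the poster's total, not a benchmark:

```python
# Hypothetical Folding@Home estimate: total ppd = cards * per-card ppd.
# PPD_PER_CARD is an assumption for illustration, not a measured value.
CARDS = 6
PPD_PER_CARD = 18_000  # assumed per-GTX 295 rate

print(CARDS * PPD_PER_CARD)  # 108000
```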


The official answer, as far as I know, is that right now we support (that is, with every driver release we have, these configurations are tested) eight on Linux and I’m not 100% sure (at least four) on WinXP. Past eight and you’ve entered crazy configuration land (population: you), but I really do think you’ll hit SBIOS limitations before you hit driver problems. I know we have a test machine set up for many-GPU experiments and with 12 GPUs we ran into numerous system BIOS issues that we had to fix before we could even boot to a console.

So unless you can try it first and make sure the BIOS works before you buy anything, I would not risk it. Even then, you are in spooky driver territory, and I won’t promise that it works.
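For what it's worth, the SBIOS failures described above are often plain 32-bit address-space arithmetic: every GPU's BARs must be mapped into the MMIO window below 4 GB, and that window runs out well before the driver ever loads. A minimal sketch, where both the per-GPU BAR total (~288 MB) and the window size (~3 GB) are assumed round numbers for illustration, not values for any particular board:

```python
# Sketch of 32-bit MMIO exhaustion on many-GPU boots.
# Sizes below are assumptions for illustration only.
MMIO_WINDOW_MB = 3 * 1024   # assumed space below 4 GB after RAM remapping
PER_GPU_BAR_MB = 256 + 32   # assumed framebuffer aperture + register BAR

def bios_can_map(num_gpus):
    """True if the assumed MMIO window can hold every GPU's BARs."""
    return num_gpus * PER_GPU_BAR_MB <= MMIO_WINDOW_MB

for n in (4, 8, 12):
    print(n, "GPUs:", "fits" if bios_can_map(n) else "out of MMIO space")
```

With these assumed sizes, 8 GPUs squeeze in but 12 do not, which lines up with the "8 works, 12 needed SBIOS fixes" experience above.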

This is where virtualization could come in handy. Xen (and Hyper-V too, I guess) already supports VT-d hardware.

Many vendors are moving to virtualized compute centers. I know Intel is backing virtualization because it lets you leverage multi-core hardware without having to create or port multi-threaded apps. But being under the virtualization umbrella could benefit CUDA as well.

This would alleviate the driver issues… but the main BIOS issues (NOT the BIOS inside the virtual PC) might still remain.
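For concreteness, handing a single GPU to a Xen guest over VT-d is a one-line entry in the domU configuration; the PCI address here is hypothetical:

```
# Xen domU config fragment (assumed slot address, shown for illustration)
pci = [ '0000:05:00.0' ]   # pass this GPU through to the guest
```

Each guest would then see only its own GPU, which sidesteps per-OS driver limits but not the host BIOS's resource allocation.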

Oh, that makes me want to try it all the more. I have 15 GTX 295s now; a $370 motherboard isn't going to slow me down.

I’ll send Asus an email (that probably won’t get a response) and ask, but the fact that the board has six PCIe slots implies that the BIOS can handle the devices. That means any problems beyond that would be OS/driver issues that might be resolvable through virtualization.



Why not just put them in separate boxes? I would think that putting all the cards in one box would actually be slower due to hitting other system bottlenecks (CPU/memory/HDD/internal buses).

Also, for extra ridiculousness points, get this (…?product_id=372) to cool the whole thing, and overclock all the cards for extra speed.

I have 15 GTX 295s spread across different machines already, dedicated to Folding@Home 24/7. The thought was to make a technology demo, but after spending some time on the Asus forums it appears to be a dead end. Someone there posted:

My ridiculousness points are already fairly high; I wanted something truly outrageous.


I don’t think that’s true. NF200 is a PCIe switch plus some special sauce for NVIDIA GPUs, but as far as I know there’s nothing preventing PCIe links behind an NF200 from being used with non-NVIDIA GPUs.