Max # of GPUs supported?

I was wondering if there is a maximum number of GPUs that CUDA supports. For example, if I plug 8 dual cards into the same server, will that work? I seem to recall either an 8-GPU or 16-GPU limit for some reason, but maybe I imagined it.

What I’m looking at:

http://www.cubixgpu.com/gpu-xpander-rackmount#rm_16_2

Reason for not using a cluster: there is a huge amount of inter-GPU communication, which would be severely hurt by going over a network interconnect.

I’m pretty sure the current limit is 8 CUDA-supported devices per host.
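
It’s easy enough to check what the runtime actually enumerates on a given box, though. A minimal sketch using cudaGetDeviceCount and cudaGetDeviceProperties (error checking mostly omitted):

[code]
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA sees %d device(s)\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // A dual card shows up as two separate entries here.
        printf("  device %d: %s (PCI bus %d)\n", i, prop.name, prop.pciBusID);
    }
    return 0;
}
[/code]

If the count comes back lower than the number of GPUs you plugged in, you’ve hit a limit somewhere (driver, BIOS, or PCIe enumeration) before CUDA even gets involved.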
Bill

Take a look at this thread:

[url]https://devtalk.nvidia.com/default/topic/649542/18-gpus-in-a-single-rig-and-it-works/[/url]

Wow! That’s very impressive.

As an aside: what sort of bandwidth might you expect for a cudaMemcpy between the two halves of a dual-card GPU? As far as I can tell it shouldn’t need to transfer through the host’s PCIe interface.
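
In case it helps, here is a rough way to measure it yourself. On dual cards the two GPUs typically sit behind an on-board PCIe switch, so with peer access enabled a peer copy should stay on the card rather than staging through host memory. A minimal sketch, assuming devices 0 and 1 are the two halves of the card and omitting most error checking:

[code]
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Assumption: devices 0 and 1 are the two halves of the dual card.
    int p01 = 0, p10 = 0;
    cudaDeviceCanAccessPeer(&p01, 0, 1);
    cudaDeviceCanAccessPeer(&p10, 1, 0);
    printf("peer access 0->1: %d, 1->0: %d\n", p01, p10);

    const size_t bytes = 256 << 20;  // 256 MiB test buffer
    void *src = NULL, *dst = NULL;
    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    if (p01) cudaDeviceEnablePeerAccess(1, 0);  // let device 0 reach device 1
    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);
    if (p10) cudaDeviceEnablePeerAccess(0, 0);

    // Time one device-to-device copy with CUDA events on device 0.
    cudaSetDevice(0);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    cudaMemcpyPeerAsync(dst, 1, src, 0, bytes, 0);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("%zu MiB in %.2f ms -> %.2f GB/s\n",
           bytes >> 20, ms, bytes / (ms * 1e6));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(src);
    cudaSetDevice(1);
    cudaFree(dst);
    return 0;
}
[/code]

Without peer access the same copy still works, but the driver stages it through host memory; comparing the numbers with peer access on and off tells you whether the transfer actually stayed on the card.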