Linking more than 4 GPUs across multiple PCs for parallel rendering

I am new to CUDA GPU computing and I'm not sure about its capabilities. Ironically, I don't need the general-purpose exposure provided by the new API. My task is to render a complex scene with output to multiple displays, and I would like to divide the rendering between several GPUs. Specifically, I would like to link around 20 NVIDIA cards for processing. I don't think I can make use of SLI or NVIDIA Quadro Plex for this particular task. I'm guessing that minute differences in output would be obvious to the eye if I tried to split up the scene geometry manually between PCs. So, I thought I would investigate CUDA.

Does anybody know if it would be well suited to my task?

Thanks,

Dale Spencer

You're better off using conventional rendering APIs (like DirectX or OpenGL). Unless you're using some fancy rendering scheme that can't be mapped to the standard rendering APIs, CUDA seems like the wrong direction for you.
If I understood your problem correctly, all you need to do is tile your viewport into pieces, perform standard rendering for each tile on a different PC or GPU, and at the end stitch it all into one big image. I can't see where you could apply CUDA here.
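Each machine just renders the same scene through an asymmetric sub-frustum covering its tile. Something along these lines (a rough sketch; the tile index and count parameters are placeholders for whatever your setup uses):

```cpp
// Per-tile projection for sort-first tiling: every PC uses the same eye
// point, near/far planes and full-frustum bounds, and carves out its own
// sub-frustum.  Tiles line up exactly as long as every machine computes
// the same plane values.
#include <GL/gl.h>

void setTileFrustum(double left, double right, double bottom, double top,
                    double znear, double zfar,
                    int tileX, int tileY, int tilesX, int tilesY)
{
    const double w = (right - left) / tilesX;   // width of one tile in frustum units
    const double h = (top - bottom) / tilesY;   // height of one tile

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Asymmetric frustum covering only this tile of the full viewport.
    glFrustum(left   + tileX * w, left   + (tileX + 1) * w,
              bottom + tileY * h, bottom + (tileY + 1) * h,
              znear, zfar);
    glMatrixMode(GL_MODELVIEW);
}
```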

Thanks for the reply, sergeyn :).

That is essentially what I want to do, but I can't keep the entire scene geometry on every system; it's too much data for one PC. I would have to do some pre-clipping and pipe the vertices across PCs as the meta-viewport changes. My main concern with doing this is the inherent precision problems with floating-point calculations. I'm worried that error in the pre-clipping and the rendering pipeline will accumulate and create nasty visual artifacts when I stitch the tiles together. I will probably try this approach and see what kind of results I get.
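For the pre-clipping I'm thinking of a conservative bounding-box test per tile with a small guard band, so an object sitting on a tile boundary gets sent to both machines rather than dropped by one of them because of rounding. A minimal sketch of the idea (the Plane and Aabb structs and the guard value are just placeholders for my own data structures):

```cpp
// Conservative per-tile culling with a guard band, assuming each tile's
// frustum is described by six inward-facing planes where
// a*x + b*y + c*z + d >= 0 means "inside".
#include <array>

struct Plane { double a, b, c, d; };      // inward-facing frustum plane
struct Aabb  { double min[3], max[3]; };  // object bounding box

// Returns true if the box might overlap the tile.  The guard band makes
// the test deliberately loose, so precision error near a tile edge can
// only cause an object to be sent twice, never to be missed.
bool tileMightNeed(const Aabb& box,
                   const std::array<Plane, 6>& tilePlanes,
                   double guard = 1e-3)
{
    for (const Plane& p : tilePlanes) {
        // "Positive vertex": the box corner farthest along the plane normal.
        double px = (p.a >= 0.0) ? box.max[0] : box.min[0];
        double py = (p.b >= 0.0) ? box.max[1] : box.min[1];
        double pz = (p.c >= 0.0) ? box.max[2] : box.min[2];
        if (p.a * px + p.b * py + p.c * pz + p.d < -guard)
            return false;                 // fully outside this plane
    }
    return true;                          // overlaps (or nearly overlaps) the tile
}
```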