Multi-GPUs of the same type: what is "type", actually?

The CUDA 2.0 manual says that CUDA will work with multiple GPUs only if they are of the same type.

What is “type” here? Does it mean all of them have to be “G80” or “G90”? Or does it mean they all have to be “8800 GTX” or “8800 GTS”, etc.?

Also, I don't understand the cause of this limitation. Can someone shed some light on this?

Thank you,

Best Regards,

It says that it is only guaranteed to work if they are the same type. My guess is that they are being extremely careful because things might change in the future (while scanning the guide I noticed they state that 32-bit integer multiplication will be faster than 24-bit multiplication on future devices, but that is not yet the case for GT200).

While testing a GT200, I tried systems with a G92 card and a GT200 card in them, and everything worked fine. Perhaps the manual is just being conservative. After all, if you compile your code for sm_13 and then try to run it on a system with a GT200 + G92, you'll get weird error messages.

NVIDIA periodically drops support for old devices from their drivers. CUDA drivers, for example, don't support GeForce 4s (and derivatives like the GeForce 5200). At some point in the future, 8-series GPUs will probably also become obsolete. Maybe that's what NVIDIA is talking about?

Or maybe it’s a cop-out in case the drivers have a bug when dealing with hardware from different generations.

Or I like MrAnderson's idea about CM versions. You might see irreparable rifts in the future when CM2.0 comes out and the new cards aren't backwards-compatible. (I'm guessing that for now, you can just revert to the lowest common denominator.)

Or it’s just a cop-out.

Thanks for your replies, guys.

I think I understand a bit now.

It would help if someone from NVIDIA could comment on this.

Best Regards,

In simple terms, one can safely use devices that have the same ‘compute capability’ as well as the same ‘number of multiprocessors’.

See Appendix A of the CUDA Programming Guide 2.0.

All the devices in the same row belong to the ‘same type’ category you mentioned.

But I'm still not sure whether devices in the same row (in Appendix A) with possibly different clock frequencies can be used together or not :(

Teju, thanks for the input! Hmm… an official clarification from NVIDIA would help a lot of people out there who are working on multi-GPU stuff…

The essence is: you don't want to develop something only to find that it won't work after a few CUDA revisions or a few hardware revisions :-(

Best Regards,

Or… you can develop code that’ll work across GPUs of different sizes/types, and buy uniform hardware down the line if anything forces you to.

CUDA will make sure that your code works across various hardware, but multi-GPU apps could fail on arbitrary multi-GPU configurations. That was my point.