Code development in CUDA with SLI enabled

Hey, I’ve got a workstation with three Tesla C1060s, which can be connected with SLI bridges and therefore work as a single device of increased computing power. However, everything I’ve read about SLI is graphics-related, and specifically gaming-related. The question I’d like answered is: under 3-way SLI (I think that’s the name) across all the Teslas, will an application developed in CUDA benefit from the “clustering” of the devices?

I’ve seen lots of answers saying that SLI must be disabled for CUDA to see all the devices, but I’m not interested in seeing three separate devices; I’m interested in knowing whether that one big combined device is visible to CUDA and whether applications will benefit from it.

I’m having a lot of trouble implementing multi-GPU applications, specifically iterative LDPC decoders that don’t divide into tasks, don’t cooperate between devices and, worst of all, iterate over kernel launches; hence the question. Performance suffers heavily from the scheduling of shared resources, i.e. the bus.

Thank you for your time.

andradx

CUDA and SLI are orthogonal. SLI can’t be used with CUDA at all (and there would be no realizable benefit for compute tasks in doing so). If you want multi-GPU, you’ll have to do it some other way.
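To illustrate “some other way”: with SLI disabled, the runtime exposes each GPU as a separate device, and the application distributes work across them explicitly. A minimal sketch of that pattern is below; the `decode_iteration` kernel is a hypothetical placeholder for one decoder pass, not the original poster’s code. (Note that on the CUDA toolkits contemporary with the C1060, a host thread was bound to a single device context, so multi-GPU typically meant one host thread per GPU; the single-threaded `cudaSetDevice` loop shown here is the style later toolkits allow.)

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel standing in for one LDPC decoding pass.
__global__ void decode_iteration(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 0.5f;  // placeholder work
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);  // with SLI disabled, every GPU is visible
    printf("CUDA sees %d device(s)\n", count);

    const int n = 1 << 20;  // elements assigned to each device
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);  // select this GPU explicitly
        float *d = nullptr;
        cudaMalloc(&d, n * sizeof(float));
        decode_iteration<<<(n + 255) / 256, 256>>>(d, n);
        cudaFree(d);
    }

    // Wait for all devices to finish before exiting.
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
    }
    return 0;
}
```

The key point is that nothing here aggregates the devices: the programmer partitions the data and launches on each GPU separately, which is exactly the burden SLI does not remove for compute workloads.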

Thanks avidday. In the meantime I tested it with a two-way SLI setup and the results were nil. You seem to be answering 90% of my questions here; thanks for the time and characters written.