I was wondering whether an application could be developed that uses CUDA on one card to do some complex calculations while the same application is running on another card on an SLI motherboard.
Is it possible to use CUDA on card0 to do calculations for an application running on card1 in SLI compatibility mode?
For example: an application that renders real terrain of the Earth from SRTM height files and uses CUDA to calculate the geometry.
cartrite
Being a beginner in CUDA, as far as I can tell the application isn’t run on your cards.
The CPU runs the application, and based on how you programmed it, you tell it to execute threads on your cards. Once the threads are launched, the CPU is free to do more work. If you want the threads to finish before continuing work on the CPU, you can call cudaThreadSynchronize(). Now… I have no idea whether you can tell CUDA to treat your graphics cards as one unit or as two separate ones.
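Something like this is roughly what I mean (just a rough sketch, the kernel here is made up):

```
#include <cuda_runtime.h>

// made-up example kernel: just doubles every element
__global__ void doWork(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

int main(void)
{
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // the launch returns right away, so the CPU is free to do other work here
    doWork<<<(n + 255) / 256, 256>>>(d_data, n);

    /* ... CPU work (load files, etc.) could go here ... */

    // wait here if you need the results before going on
    cudaThreadSynchronize();

    cudaFree(d_data);
    return 0;
}
```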
hope this helps
I think maybe you are misunderstanding what I meant to ask. Yes, the CPU runs the application. Here is the problem. The goal of the application is to render real terrain. This can be a very complex task for the CPU: it takes a lot of CPU time to turn a flat height file into the geometry of a spherical body whose radius varies according to the data value in the height map. So I wanted to get CUDA to do this because I’m thinking it can do these calculations a lot faster. But the problem is, if the GPU is calculating the geometry, then it can’t be rendering the scene.
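Roughly, the calculation I want to offload would look something like this (just a sketch; the names and the SRTM details are simplified):

```
#include <cuda_runtime.h>

// sketch only: turn a flat grid of SRTM heights into vertices on a sphere,
// pushing the radius out by the height value at each sample
__global__ void heightmapToSphere(const short *height, float3 *verts,
                                  int width, int rows,
                                  float lon0, float lat0, float cellDeg)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= rows)
        return;

    float lon = (lon0 + x * cellDeg) * 3.14159265f / 180.0f;
    float lat = (lat0 - y * cellDeg) * 3.14159265f / 180.0f;

    // mean Earth radius in metres plus the height sample (no exaggeration factor here)
    float r = 6371000.0f + (float)height[y * width + x];

    float3 v;
    v.x = r * cosf(lat) * cosf(lon);
    v.y = r * cosf(lat) * sinf(lon);
    v.z = r * sinf(lat);
    verts[y * width + x] = v;
}
```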
What I’m asking is whether one card can render the scene and the other card can use CUDA to calculate the geometry, while the application on the CPU decides which files to get next, loads them into memory, etc.
But from what I read about the inner workings of SLI, the split frame rendering and alternate frame rendering modes use the same memory and share the workload of rendering the scene. Maybe I’m wrong, but this leads me to believe that CUDA could not be used at the same time in any of these modes, or it would degrade performance for displaying the scene. In compatibility mode one card is idle, so I was wondering if the idle card could be used to calculate the geometry with CUDA for the other card that is rendering the scene.
cartrite
You have to disable SLI for CUDA to see more than one device; it does not work with SLI.
You can have one card do visualization and the other card do the calculations.
You can also have one card do both visualization and calculations.
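For example, device selection is done from the application, something like this (just a sketch; it assumes the display is driven by device 0, so the CUDA work goes to the other card):

```
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA sees %d device(s)\n", count);  // with SLI enabled this will likely report only one

    if (count > 1)
        cudaSetDevice(1);  // put the CUDA work on the card that is not driving the display
    else
        cudaSetDevice(0);  // fall back: the same card does both visualization and calculations

    /* ... allocate memory and launch kernels here as usual ... */
    return 0;
}
```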
Isn’t disabling SLI a hardware setup step on the motherboard? At least on my motherboard it is. So asking potential users to do that step every time they wanted to run this application would not be an option. It sounds like the best option here is to use SLI and have both cards share both the calculations and the visualization. Wouldn’t that bring performance to a crawl though? :blink: I thought trying to write an application like my suggestion would get ugly. Maybe I’ll try something simpler and see how it works.
Thanks to both of you for your replies.
cartrite
No, CUDA does not work with SLI enabled. When your output is not just visualization, it becomes practically impossible to distribute calculations generically at the driver level. With CUDA you can distribute your calculations over more than one card, but you have to do it yourself.
As far as I know you can disable SLI in the driver, both in Windows and in Linux.
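A bare-bones sketch of the “do it yourself” part, with one host thread per card (pthreads is just an example here; the actual kernels and data transfers are left out):

```
#include <cuda_runtime.h>
#include <pthread.h>

// each host thread binds to its own device and works on its share of the data
static void *worker(void *arg)
{
    int dev = *(int *)arg;
    cudaSetDevice(dev);           // one device (and context) per host thread
    /* cudaMalloc / kernel launches / cudaMemcpy for this device's half go here */
    cudaThreadSynchronize();      // wait for this device to finish
    return NULL;
}

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);   // with SLI disabled you should see both cards

    pthread_t threads[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < 2 && i < count; i++)
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2 && i < count; i++)
        pthread_join(threads[i], NULL);

    return 0;
}
```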
Thanks again, I didn’t know that SLI could be disabled that way. On my motherboard there is a chip between the two SLI slots. I thought that had to be flipped or something to enable the SLI slots for two cards, so I thought SLI would always be enabled. Right now I only have one card, so I never tried this. This is interesting though.
So SLI would have to be disabled to use CUDA. OK.
cartrite
I don’t think you know what SLI Compatibility mode is. I don’t either.
cartrite, flipping the chip changes the configuration of the PCI-E bus. That wouldn’t just disable SLI, it’d disable the 2nd slot. Compatibility mode might be the solution; I have no idea.
Also, what exactly happens when running CUDA with SLI enabled? I imagine it still works and just runs on one card (as chosen by the driver). I’m assuming this because that’s how it would have to work if CUDA were to gain mass adoption. If so, then you don’t have to disable SLI and performance should stay good.