Multiple GPUs

Hi all,
I have 2 GPUs (GTX 280) installed in my computer. I want to allocate one of them just for display (as a usual video card) and the other one for CUDA programming and computation. Does anybody know how I can do that?

A lot of people say you have to have an extra monitor hooked up to card #2, but this is false.

If you have the TV-out cable that comes with a lot of these cards, just plug it into the TV-out port on the back of your CUDA card… the other end can be left free, it doesn't have to be plugged into anything.

Then right-click your desktop and select Personalize > Display Settings, and select one of the four screens you see to extend the desktop.

On my computer with two 8800 GTS cards, screens 1 and 3 go to the primary card and screens 2 and 4 go to the second card.

I usually select screen 4 and choose "Extend onto this device."

Thanks for your reply.
So, if I connect the monitor to GPU #0, GPU #1 is completely free, and then by calling cudaSetDevice(1) I can run my CUDA code on GPU #1 and nothing will run on GPU #0? Am I right?

You forgot to mention the TV-out cable that goes into the TV-out port on GPU #1, as I previously stated.

Oops, I forgot… but I can't understand why I need that port for the CUDA GPU (i.e. GPU #1), because it's not going to be used for display at any time. :blink:

No, no, no, a thousand times no. What g000fy said is completely wrong.

You do not have to extend the desktop. Specifically, do not extend the desktop. If you extend the desktop, the watchdog timer (the 5-second kernel runtime limit) takes effect.

It will show up as two CUDA devices; one is display, one is not. There's not a really good way to know which is which (it will be one way in Linux and another in Windows), so write an infinitely looping kernel and run it on each device. The one that doesn't return an unspecified launch failure 5–10 s after starting it is the non-display card. (In Windows, I think it should be device 1 that is compute, and vice versa for Linux.)
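A rough sketch of that probe, as I understand it (the kernel name, the one-device-per-run layout, and the exact error string are my assumptions, not something official):

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Deliberately never-terminating kernel, used only to trigger the watchdog.
__global__ void spin(void)
{
    for (;;) { }
}

int main(int argc, char **argv)
{
    // Probe one device per run: ./probe 0, then ./probe 1.
    int dev = (argc > 1) ? atoi(argv[1]) : 0;
    cudaSetDevice(dev);

    spin<<<1, 1>>>();
    cudaError_t err = cudaThreadSynchronize();

    // On the display GPU the watchdog kills the kernel after roughly 5 s
    // and this prints something like "unspecified launch failure"; on the
    // non-display GPU the call never returns, so kill the process yourself
    // once it has clearly outlived the watchdog window.
    printf("device %d: %s\n", dev, cudaGetErrorString(err));
    return 0;
}
```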

I didn't connect anything to GPU #1, and the monitor is connected to GPU #0. So, if at the beginning of my code I put cudaSetDevice(1), can I assume that GPU #1 is fully doing the CUDA computations and GPU #0 is doing the display duties?

Yes, that is correct.
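For what it's worth, a minimal sketch of that setup, assuming GPU #1 really is the non-display card as discussed above:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    // All subsequent CUDA calls in this thread go to device 1,
    // leaving GPU #0 free to drive the display.
    cudaSetDevice(1);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 1);
    printf("computing on: %s\n", prop.name);

    // ... allocate memory and launch kernels here; they all run on GPU #1.
    return 0;
}
```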

On XP this'll work, but on Vista don't you need both to be display cards? At least that's what's been said here; I run Vista but only one card. Also, how do you stop an infinitely looping kernel? (Again, for me it just times out.)

Crap, I really thought that was the case when SLI was not enabled.

And to my understanding, to use multiple GPUs with CUDA you have to have SLI disabled.

With PhysX in Vista, this is what you have to do.

With Windows XP and Linux you do not have to do this.

Are you sure, tmurray, that you are correct about this if he is using Vista?

Look at where I work. Vista is special because at the moment you can’t run CUDA on non-display cards.

Well, that is what I said, and you said I was wrong.

For Vista you have to have a device hooked up to BOTH video cards… that is what I stated.

Thanks for your help.
My OS is XP 64-bit, so I don't have to make both GPUs display devices.

And something else… Each GTX 280 has two dual-link DVI ports and one analog HDTV-out port. The monitor is connected to one DVI port (on GPU #0), but when I connect the monitor to the other DVI port (on GPU #0), or to either of GPU #1's DVI ports, the monitor turns off. Is this normal?

Oh, you are running XP… well, then that does make a difference.

Do what tmurray said.

He works for NVIDIA, so you know he's probably got this stuff down pat.

Please see also the following information:

Yes, I realized that I have this problem too. The second card, which is not connected to the monitor, takes more computation time for CUDA than GPU #0, which is driving the monitor!

[quote=omon, Oct 3 2008, 07:27 PM]
Please see also the following information:
[/quote]

In Vista, just use the TV-out cable… it's easier if you have one already.