Multi GPU question

Hi everyone,

my lab just bought a new Tesla S1070 server. I’ve been testing it for a few days, and I sometimes get strange results. I wonder if this may be caused by the fact that I run several programs, each of which uses the GPU. Is there a way to be sure that each independent program runs on a different GPU of the Tesla S1070? I call cublas_init at the beginning of my code, is that enough?

Thank you for your answers.

Regards,

Mathieu

Upgrade to CUDA 2.2 and use nvidia-smi to mark each GPU for one exclusive context. Then run your app without calling cudaSetDevice() and apps running at the same time will automatically be sent to different GPUs.
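On the application side the key point is simply not to pick a device yourself. A minimal sketch (using the standard runtime API; the automatic fallback to a free GPU is exactly what compute-exclusive mode is supposed to give you):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        /* No cudaSetDevice() call: with every GPU in compute-exclusive
           mode, the first runtime call creates a context on a free GPU. */
        float *d_buf = NULL;
        if (cudaMalloc((void **)&d_buf, 1024 * sizeof(float)) != cudaSuccess) {
            fprintf(stderr, "no free GPU available\n");
            return 1;
        }

        int dev = -1;
        cudaGetDevice(&dev);   /* report which GPU this process actually got */
        printf("running on device %d\n", dev);

        cudaFree(d_buf);
        return 0;
    }

Launch several instances at the same time and each one should report a different device number.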

I haven’t tried this out yet myself, so I don’t know the exact syntax for nvidia-smi. Maybe I’ll play with it later today and post back.

It looks to me like “nvidia-smi” just reports the GPU temperature!

Install 2.2 and follow the instructions at:

http://forums.nvidia.com/index.php?showtopic=96638
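The gist of those instructions is to put each GPU into compute-exclusive mode once, as root. With the nvidia-smi shipped in the 185-series drivers that was something like this (the flags here are from memory, so double-check them against nvidia-smi’s own help output):

    nvidia-smi -g 0 -c 1
    nvidia-smi -g 1 -c 1

i.e. one call per GPU (the S1070 has four), where -g selects the GPU and -c 1 sets the compute-exclusive rule.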

I don’t understand a thing in these instructions! Is it just something I have to type on the command line? Is it something I have to do for each program launch? Is there some kind of manual for nvidia-smi?

Hello,

since you guys are already working on Tesla, can you please tell me if it is possible for the Tesla architecture to execute MAD+MUL or MAD+SFU simultaneously?

Thanks

The Tesla S1070/C1060 is built on the same GT200 architecture as the GTX 285, so yes.
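If you want to confirm which chip you actually have, a quick sketch using the standard runtime API prints the relevant properties (compute capability 1.3 marks the GT200 generation):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);

        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("device %d: %s, compute capability %d.%d, %d multiprocessors\n",
                   i, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
        }
        return 0;
    }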

Hi,

I have the same problem, but related to different apps using different devices under a Windows environment. Is there a way to tweak the Windows CUDA driver to set exclusive compute mode for different devices?

Thanks,

Julien