My lab just bought a new Tesla S1070 server. I've been testing it for a few days, and I sometimes get strange results. I wonder whether this could be caused by the fact that I run several programs, each of which uses a GPU. Is there a way to make sure that each independent program runs on a different GPU of the Tesla S1070? I call cublas_init at the beginning of my code; is that enough?
Upgrade to CUDA 2.2 and use nvidia-smi to put each GPU into compute-exclusive mode, so each one allows only a single context. Then run your apps without calling cudaSetDevice(), and apps running at the same time will automatically be sent to different GPUs.
I haven’t tried this out yet myself, so I don’t know the exact syntax for nvidia-smi. Maybe I’ll play with it later today and post back.
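In the meantime, here is a minimal sketch of how a program could check where it landed (my assumption: the CUDA 2.2 runtime API, where the computeMode field was added to cudaDeviceProp). Note that it deliberately never calls cudaSetDevice():

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        /* Force the runtime to create a context WITHOUT calling
           cudaSetDevice(); with the GPUs in compute-exclusive mode
           the driver should bind us to a GPU not already in use. */
        cudaFree(0);

        int dev = -1;
        cudaGetDevice(&dev);

        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        /* computeMode should read cudaComputeModeExclusive (1) if
           nvidia-smi restricted this GPU to one context. */
        printf("bound to GPU %d (%s), computeMode = %d\n",
               dev, prop.name, prop.computeMode);
        return 0;
    }

Running four instances of this at once should print four different device numbers if exclusive mode is set up correctly.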
I don't understand these instructions at all! Is it just something I have to type on a command line? Is it something I have to do before each program launch? Is there some kind of manual for nvidia-smi?
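Yes, nvidia-smi is a standalone command-line utility that ships with the driver (see `nvidia-smi --help`, or `man nvidia-smi` on Linux). You set the compute-mode rule once per GPU rather than before every program launch, although the setting generally does not survive a reboot, so people usually put it in an init script. The flag spelling below is my assumption from the CUDA 2.2-era tool; verify it against the help output of your version:

    # Show the current compute-mode rules
    # (assumed flag; check `nvidia-smi --help`)
    nvidia-smi -s

    # Put each of the four S1070 GPUs into compute-exclusive mode
    # (rule 1 = exclusive: one compute context per GPU)
    nvidia-smi -g 0 -c 1
    nvidia-smi -g 1 -c 1
    nvidia-smi -g 2 -c 1
    nvidia-smi -g 3 -c 1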
Since you guys are already working on Tesla, can you please tell me whether the Tesla architecture can execute MAD+MUL or MAD+SFU simultaneously?
I have the same problem, but related to different apps using different devices in a Windows environment. Is there a way to tweak the Windows CUDA driver to set compute-exclusive mode for different devices?