Can I run a CUDA application while using that same video card as my primary display device?
I am new to CUDA and trying to decide if purchasing 2 cards would make my life easier or not matter at all.
Thanks
Yes, you can use one card. It doesn't really matter a whole lot. In some instances you're better off with two cards, but don't worry.
I for one would sometimes like to have a nice, cheap video-only card so my badly programmed CUDA stuff doesn’t garble up the display, but it’s a detail for me.
Thanks for the reply.
OK, so am I right in thinking that my screwed-up CUDA app could do weird things to my desktop display, but my desktop display will not mess with (i.e. overwrite) memory belonging to the CUDA app? Also, will the display impact performance of the CUDA app? I guess it comes down to: does the CUDA app greedily hog the card while it is running?
If you’re interested in the best performance, you should dedicate a GPU to a single task, rather than splitting it.
Under normal circumstances, the display portion of the card will never touch your CUDA allocated memory. That being said, there is a warning in the manual that display resolution changes that require more free memory than is available could potentially touch your CUDA allocated memory.
CUDA does hog the GPU. If you run kernels that each execute for ~1 s, the display will only update about once per second, so the mouse cursor moves in visible jumps. I often run millions of ~5 ms kernels and observe very little desktop display lag. There is no noticeable performance hit to the CUDA app (<1%) as long as the display is not being updated. If I start dragging a window around, causing a lot of display updates, the CUDA app's performance suffers by ~50%.
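To illustrate what I mean by many short kernels, here is a rough sketch (the kernel, data sizes, and chunking are made up for the example) of splitting one long computation into many brief launches so the GPU can service display updates in between:

```cpp
// Rough sketch: break one long computation into many short kernel
// launches so the GPU can update the display between them.
#include <cuda_runtime.h>

__global__ void processChunk(float *data, int offset, int count)
{
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + count)
        data[i] = data[i] * 2.0f + 1.0f;   // placeholder work
}

int main()
{
    const int n = 1 << 24;
    const int chunk = 1 << 18;             // small enough for ms-scale kernels
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    for (int offset = 0; offset < n; offset += chunk) {
        int count = (offset + chunk <= n) ? chunk : n - offset;
        int threads = 256;
        int blocks = (count + threads - 1) / threads;
        // Each launch finishes quickly, so the desktop stays responsive.
        processChunk<<<blocks, threads>>>(d_data, offset, count);
    }
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}
```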
OK, so to expand on this, say I want to run a CUDA application on one (or two, or three) GPUs in my system, then use the remaining card as my primary display, for games or whatever else.
Is it possible, when coding, to assign work to specific GPUs without knowing in advance which devices will be present (e.g. if my friend ran the program instead of me)?
I am aware of the non-SLI issue, which is fine, but in addition, theoretically if the above were true, I could just as well use two cards in SLI and two separate, dedicating the two separate ones to CUDA while keeping the SLI pair for gaming, correct?
It is up to you how you write your program. I just have a command-line argument that is passed to cudaSetDevice().
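Something like this minimal sketch does the trick (the exact argument handling and the printout are just one way to do it):

```cpp
// Sketch: pick the CUDA device from argv[1] at run time
// instead of hardcoding it at compile time.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char **argv)
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    // Default to device 0; let the user override it on the command line.
    int dev = (argc > 1) ? atoi(argv[1]) : 0;
    if (dev < 0 || dev >= deviceCount) {
        fprintf(stderr, "Invalid device %d (found %d devices)\n", dev, deviceCount);
        return 1;
    }

    cudaSetDevice(dev);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    printf("Running on device %d: %s\n", dev, prop.name);

    // ... allocate memory and launch kernels as usual; they run on this GPU ...
    return 0;
}
```

That way your friend can just pass a different device number to keep the CUDA work off whichever card is driving the display.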
I don't think this is possible. When I had my system set up with a 9800 GX2 and an 8800 GTS (G92), SLI did not show up as an option in the NVIDIA control panel.
Oh nice, thanks, that's a good idea.
And was it enabled previously? I am ideally going to get four 8800 GTs for a small CUDA cruncher, two of which would run in SLI while the other two would be dedicated to CUDA. But I guess I'll just have to research this more myself. Thanks though.