CUDA on a non-dedicated GPU

Hi everybody,

I’m using NVIDIA GeForce 8xxx-series graphics cards for CUDA programming, on PCs that have only one GPU (from that series).

My question: is it possible to preallocate memory for CUDA computation so that this block of memory is untouchable by other GPU-intensive software? This matters to me because I need a 100% guarantee that the amount of free device memory at the beginning of my function is the same as at the end, which is not the case if the user multitasks and opens GPU-intensive software in the background.
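To make this concrete, here is a minimal sketch of the kind of before/after check I mean, using the runtime API’s cudaMemGetInfo (the actual CUDA work is omitted and error checking is left out for brevity):

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Illustrative only: compare free device memory before and after
     * doing CUDA work. If another GPU-hungry process allocates memory
     * in between, free_after will differ from free_before. */
    int main(void)
    {
        size_t free_before, free_after, total;

        cudaMemGetInfo(&free_before, &total);

        /* ... launch CUDA work here ... */

        cudaDeviceSynchronize();
        cudaMemGetInfo(&free_after, &total);

        if (free_after != free_before)
            printf("Free memory changed: %zu -> %zu bytes\n",
                   free_before, free_after);
        return 0;
    }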

And a related question: if I use two GPUs, can I dedicate one of them solely to CUDA computation while the other runs everything else?

Thank you…

Y.

Run on a Linux machine without X installed; that way the windowing system can’t touch the GPU memory at all. However, a second CUDA app running at the same time could still allocate memory and break your guarantee, so you need some way of ensuring that only one CUDA application runs at a time. One way to do this is to run jobs through a standard job-queueing system (I use SGE).

On launch, your app should call cudaSetDevice to choose which device to run on.
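For example, a minimal sketch that lists the available devices and then binds the process to one of them (the device index 1 is just a placeholder for whichever board you dedicate to CUDA):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);

        /* List the available devices so you can pick the dedicated one. */
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s\n", i, prop.name);
        }

        /* Bind this process to device 1 (placeholder index); subsequent
         * CUDA calls in this thread will use that GPU. */
        cudaSetDevice(1);

        /* ... CUDA work goes here ... */
        return 0;
    }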