I’m using NVIDIA’s GeForce 8xxx-series graphics cards for CUDA programming, running on PCs with only one GPU (from the aforementioned series).
My question is: is it possible to preallocate memory for CUDA computation such that this block of memory will be untouchable by other GPU-demanding software? (This is important to me because I’m looking for a 100% guarantee that the amount of free memory I have at the beginning of the function is the same as at the end of the function. This is not the case if the user starts multitasking and opens GPU-demanding software in the background.)
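For reference, what I have in mind is roughly the following sketch: reserve a large block once at startup with `cudaMalloc`, sub-allocate from it myself, and compare `cudaMemGetInfo` readings before and after (the 256 MB size is just an example I picked):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    size_t freeBefore = 0, freeAfter = 0, total = 0;
    cudaMemGetInfo(&freeBefore, &total);

    /* Reserve one big block up front (256 MB here is arbitrary). */
    void *pool = NULL;
    if (cudaMalloc(&pool, 256 * 1024 * 1024) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* ... sub-allocate from `pool` for all my kernels instead of
       calling cudaMalloc per computation ... */

    cudaMemGetInfo(&freeAfter, &total);
    printf("free before: %zu, free after: %zu\n", freeBefore, freeAfter);

    cudaFree(pool);
    return 0;
}
```

This obviously holds the memory for *my* process, but I don’t know whether it stops other applications from exhausting the rest of the card, which is the guarantee I’m after.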
And a related question: if I use two GPUs, can I dedicate one of them solely to CUDA computation while the other runs everything else?
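In case it matters, I know I can pick a device per process with `cudaSetDevice`; what I’m unsure about is whether the display and other software can be kept off that device entirely. A minimal sketch (device index 1 for the second card is just an assumption):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);

    /* Assuming device 1 is the card with no display attached:
       direct all CUDA work in this process to it. */
    cudaSetDevice(1);

    /* ... cudaMalloc / kernel launches now target device 1 ... */
    return 0;
}
```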