Performing Computation and Display on Same GPU?

Hi,

I’m a little unsure how to manage memory limits when GPU memory is also being used by other applications running at the same time. If there are two GPUs in the machine I can simply use one for display and one for computation, which gives me the entire memory of the compute GPU. How do I handle doing both on the same GPU? Ideally I want my application to use as much device memory as is available, then overflow to RAM if necessary.

Thanks,
Dan

That’s not going to happen (unless you implement it yourself using ZeroCopy). The GPU is a completely separate device, with its own memory space. There’s no such thing as ‘swap space’ for a GPU.
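
If you do decide to go the zero-copy route, the rough shape is something like this (just a sketch, assuming a device and driver that support mapped pinned memory; check cudaDeviceProp::canMapHostMemory first):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // Allow mapping pinned host memory into the device address space.
    // This must be set before the CUDA context is created.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    const size_t bytes = 64 * 1024 * 1024;   // example size: 64 MB of host RAM

    float* hostPtr = NULL;
    if (cudaHostAlloc((void**)&hostPtr, bytes, cudaHostAllocMapped) != cudaSuccess) {
        fprintf(stderr, "cudaHostAlloc failed\n");
        return 1;
    }

    // Device-side alias of the same host buffer; kernels that read or write
    // through this pointer go over the bus to host RAM, not device memory.
    float* devPtr = NULL;
    cudaHostGetDevicePointer((void**)&devPtr, hostPtr, 0);

    printf("Mapped %zu MB of host memory at device pointer %p\n",
           bytes / (1024 * 1024), (void*)devPtr);

    cudaFreeHost(hostPtr);
    return 0;
}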

Sorry, I didn’t mean that at all. I meant that I want to manage the explicit copying back and forth between RAM and GPU memory myself, but I really need to know exactly how much device memory is available to me. I’m getting NaN results when I run out of GPU memory, so I need to know how much the display is using. With a minimal display load (i.e. no applications doing heavy OpenGL rendering), the NaN errors go away.

Also, I presume there’s no way of running more than one application on the GPU at once, at least not until CC 2.0?

cudaMemGetInfo() will give you the free and total number of bytes in device memory.
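
Something along these lines (a minimal sketch; note that on older toolkits the parameters were unsigned int* rather than size_t*):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    size_t freeBytes = 0, totalBytes = 0;

    // Free/total device memory at this moment; the free figure already
    // excludes whatever the display driver and other contexts are holding.
    cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    printf("Free:  %zu MB\n", freeBytes  / (1024 * 1024));
    printf("Total: %zu MB\n", totalBytes / (1024 * 1024));
    return 0;
}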

You can run more than one application at a time now, but context switching is not very efficient, so you don’t want to do that if you expect both programs to want a lot of GPU time.

That sounds like what I’m after. What would happen if I allocated all of the remaining GPU memory for my application and the display then needed more than it was currently using?

Regarding running more than one application at a time, I guess I could idle the second until the GPU becomes available again; that would be quite useful, I think.

I believe that your application gets the boot.

That’s unfortunate. I’ll try to leave enough memory free to avoid this, but it would be nice to receive some kind of event so I could deal with the situation more cleanly from within the application.
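
What I have in mind is roughly the following: query the free memory, leave a fixed headroom for the display, and fail loudly if the allocation doesn’t fit rather than carrying on and producing NaNs later. The 128 MB headroom is just a guess on my part, not a documented figure:

#include <cstdio>
#include <cuda_runtime.h>

// Headroom left for the display driver; the right figure is a guess and
// will depend on the desktop environment and what else is running.
static const size_t kDisplayHeadroom = 128 * 1024 * 1024;

// Grab as much of the remaining device memory as possible while leaving
// some headroom, and fail loudly instead of silently producing NaNs later.
void* allocateWorkingBuffer(size_t* allocatedBytes)
{
    size_t freeBytes = 0, totalBytes = 0;
    if (cudaMemGetInfo(&freeBytes, &totalBytes) != cudaSuccess)
        return NULL;

    if (freeBytes <= kDisplayHeadroom)
        return NULL;                      // not enough left to be worth running

    const size_t request = freeBytes - kDisplayHeadroom;

    void* devPtr = NULL;
    cudaError_t err = cudaMalloc(&devPtr, request);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMalloc of %zu bytes failed: %s\n",
                request, cudaGetErrorString(err));
        return NULL;
    }

    *allocatedBytes = request;
    return devPtr;
}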

Glad to see others are trying some of the same stuff as I am and running into the same issues.

If your host is Linux, is there any mechanism to limit the amount of GPU memory used by the display, so you know how much memory you have available for your applications?

Yes, this would be really useful. I’m on Linux too.