How to check available global memory size?

We are trying to make two processes share one GPU, i.e., run both processes at the same time with both using the same GPU for computation. The total time is faster than running them sequentially. Everything works except for one memory issue: when the combined memory demand exceeds the total global memory on the card, the first process gets enough memory, but the second one reports an "out of memory" error when it requests its share, since the total demand is larger than the global memory available. We can dynamically move some of the computation to the CPU so that the second process uses less GPU memory. To do this, I'd like some way to check how much global memory is left on the card.
It seems the API cudaGetDeviceProperties can only report the static total global memory. My question is: is there a function to check the "available" global memory?

Thanks a lot.

cuMemGetInfo — please check it in CudaReferenceManual.pdf.
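A minimal sketch of how to query the remaining global memory. This uses the runtime-API counterpart cudaMemGetInfo, which reports the same free/total numbers as the driver-API cuMemGetInfo mentioned above without requiring explicit context management (note: in early CUDA releases cuMemGetInfo took unsigned int pointers rather than size_t):

```cuda
// query_free_mem.cu -- query free vs. total global memory on the current device.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;

    // Returns the amount of global memory currently available for allocation,
    // plus the total global memory on the card.
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    printf("free:  %zu MB\n", free_bytes >> 20);
    printf("total: %zu MB\n", total_bytes >> 20);
    return 0;
}
```

The "free" value reflects allocations made by all processes sharing the card, so the second process can call this at startup to see what the first process has left.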


One cannot share GPU pointers between two contexts… GPU pointers are tied to a context… If you try to share pointers between contexts, you will get a segfault (unspecified launch error)…

Be aware…

Oh, yes, this is what I’m looking for. Thanks!!!

Thanks, Sarnath,

I'm not trying to share GPU pointers between contexts — just trying to figure out the right amount of memory to allocate for each context…
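That sizing decision could be sketched roughly as follows: query the free memory first, allocate only up to a budget on the GPU, and spill the rest of the work to the CPU. The 90% safety margin and the requested size here are illustrative assumptions, not values from this thread:

```cuda
// size_per_process.cu -- cap this process's GPU allocation by the free memory,
// moving whatever does not fit onto the CPU (sketch; margin/sizes are assumed).
#include <algorithm>
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_b = 0, total_b = 0;
    if (cudaMemGetInfo(&free_b, &total_b) != cudaSuccess) return 1;

    size_t wanted   = 512u << 20;                          // hypothetical demand: 512 MB
    size_t budget   = static_cast<size_t>(free_b * 0.9);   // leave headroom for the driver
    size_t gpu_part = std::min(wanted, budget);            // portion that stays on the GPU

    void* d_buf = nullptr;
    if (cudaMalloc(&d_buf, gpu_part) != cudaSuccess) return 1;

    printf("GPU gets %zu MB; %zu MB of work falls back to the CPU\n",
           gpu_part >> 20, (wanted - gpu_part) >> 20);

    cudaFree(d_buf);
    return 0;
}
```

There is still a race between the cudaMemGetInfo call and the cudaMalloc (the other process may allocate in between), so treating the allocation failure itself as the fallback trigger is a useful second line of defense.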