How to get the available size of GPU memory?

Hi, is there any built-in function to get the available size of GPU memory at runtime (global memory, or shared memory if possible)? I didn't find this kind of function in the manual. Is there an alternative way?


The SDK example deviceQuery shows how to do it (it is also covered in the manual).

In the NVIDIA CUDA SDK, there is an example project called “deviceQuery”. It reads a field called ‘totalGlobalMem’, which gives the device’s total global memory, and there are corresponding fields for shared memory, constant memory, etc.

Best to look at the example.
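If you just want the totals, a minimal sketch along the lines of what deviceQuery does (assuming device 0 and the CUDA runtime API's cudaGetDeviceProperties):

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    // Query the properties of device 0.
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    printf("Total global memory:   %zu bytes\n", prop.totalGlobalMem);
    printf("Shared memory / block: %zu bytes\n", prop.sharedMemPerBlock);
    printf("Total constant memory: %zu bytes\n", prop.totalConstMem);
    return 0;
}
```

Note these are the device's fixed capacities, not how much is currently free.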

I think the question was how to get the CURRENT available memory, not the total device memory. If I allocated 256MB out of 512MB, how do I find out that the remaining free memory is 256MB?

You can use cuMemGetInfo. This is one of the few functions from the driver API that you can mix with the high-level API.

See my post for an example.
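A minimal sketch of how cuMemGetInfo can be mixed with the runtime API (assumptions: a recent toolkit where cuMemGetInfo takes size_t pointers — very old releases used unsigned int — and that a dummy runtime call like cudaFree(0) is enough to establish the context the driver API needs):

```cuda
#include <stdio.h>
#include <cuda.h>
#include <cuda_runtime.h>

int main(void)
{
    // Touch the runtime API first so a CUDA context exists;
    // cuMemGetInfo reports memory for the current context's device.
    cudaFree(0);

    size_t free_bytes = 0, total_bytes = 0;
    CUresult res = cuMemGetInfo(&free_bytes, &total_bytes);
    if (res != CUDA_SUCCESS) {
        fprintf(stderr, "cuMemGetInfo failed with code %d\n", (int)res);
        return 1;
    }
    printf("Free : %zu bytes (%zu MB)\n", free_bytes, free_bytes >> 20);
    printf("Total: %zu bytes (%zu MB)\n", total_bytes, total_bytes >> 20);
    return 0;
}
```

Newer toolkits also expose the same query directly in the runtime API as cudaMemGetInfo, which avoids mixing the two APIs altogether.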

Nevermind :-)

Thank you! Yes, I actually meant the currently available memory space.

I tried your code on Windows, and I also tried cuMemGetInfo, but the output is weird:

^^^^ Free : 4333584 bytes (4232 KB) (4 MB)

^^^^ Total: 4198762 bytes (4100 KB) (4 MB)

And I'm not clear on what CUcontext does in your code.