Get available memory on a Quadro 6000

I’ve been searching for a way to determine the amount of available memory on a Quadro 6000, but the only information I can find says to use the cuMemGetInfo() function.

However, it returns a size_t, which on my architecture is limited to a maximum of 2^32 bytes, i.e. only 4 GB.
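For reference, this is roughly how I’m querying it through the driver API (a minimal sketch, error checking omitted):

    #include <stdio.h>
    #include <cuda.h>

    int main(void)
    {
        CUdevice dev;
        CUcontext ctx;
        size_t freeMem = 0, totalMem = 0;

        /* Initialize the driver API and create a context on device 0 */
        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);

        /* Both values come back as size_t, so they are limited by its width */
        cuMemGetInfo(&freeMem, &totalMem);
        printf("free: %zu bytes, total: %zu bytes\n", freeMem, totalMem);

        cuCtxDestroy(ctx);
        return 0;
    }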

Is there a way to get a full report on the amount of available memory that I am just missing?

For interoperability between host and device code, CUDA enforces identical pointer sizes for both, i.e. the bit-width of size_t and of pointers is dictated by the host environment. If you are on a 32-bit platform, you will therefore be able to address at most 2^32 bytes of GPU memory from CUDA, even if more memory is physically present. So cuMemGetInfo() is reporting the correct amount of memory for that environment.
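If you want to confirm what your build environment gives you, a quick host-side check (just a sketch, no CUDA calls needed) is to print the width of size_t and its maximum value:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* On a 32-bit host build, size_t is 4 bytes and SIZE_MAX is 2^32 - 1,
           so cuMemGetInfo() cannot report more than 4 GB, regardless of how
           much memory the GPU physically has. */
        printf("sizeof(void*) = %zu, sizeof(size_t) = %zu\n",
               sizeof(void *), sizeof(size_t));
        printf("SIZE_MAX = %llu bytes (about %.1f GB)\n",
               (unsigned long long)SIZE_MAX,
               SIZE_MAX / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }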

You could check what memory size nvidia-smi reports, but I have no experience running cards with more than 4 GB of memory under a 32-bit OS, so I can’t say what you should expect to see.

In general, I would strongly recommend switching to a 64-bit OS when using GPUs with 4 GB or more of memory on board, so that all physically available memory can be used.