I’m fairly new to CUDA and still trying to resolve some issues with it — memory questions, for now. To be exact:
How do I get the actual amount of memory available for allocation?
I’m using driver version 185.85 with CUDA 2.2; the hardware is an 8400M GS card with 128 MB of dedicated memory.
I’m trying to allocate device memory for an array of floats of size, e.g., 4000 * 4000 * sizeof(float) = 64,000,000 bytes (64 MB):
[codebox]error_here = cudaMalloc((void**) &d_A, size_A);
cout << cudaGetErrorString(error_here) << endl;[/codebox]
and it returns
[codebox]cudaErrorNotReady code is 1
out of memory[/codebox]
It is obvious that I can’t do that in this case because I don’t have enough device memory.
On the other hand, using code:
[codebox]unsigned int free, total;
res = cuMemGetInfo(&free, &total);
cout << "Free memory " << free/(1024 * 1024) << " Mbytes out of " << total/(1024 * 1024) << " Mbytes" << endl;[/codebox]
I get 128 MB free out of 2943 MB total, and the values don’t change at all. Hm.
So, does anyone know a way to find the exact amount of device memory available for allocation at any given moment?
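In case it helps, here is how I understand `cuMemGetInfo` is supposed to be called (a minimal sketch with error handling trimmed; the `unsigned int` parameters match the CUDA 2.x driver API signature). My guess is the bogus totals above come from calling it without a valid driver-API context, so this creates one first:

```c
#include <stdio.h>
#include <cuda.h>  /* driver API; link against libcuda */

int main(void) {
    CUdevice dev;
    CUcontext ctx;
    unsigned int free_mem, total_mem;  /* CUDA 2.x uses unsigned int here */

    /* cuMemGetInfo only returns meaningful numbers inside a valid context */
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    cuMemGetInfo(&free_mem, &total_mem);
    printf("Free: %u MB of %u MB\n",
           free_mem / (1024 * 1024), total_mem / (1024 * 1024));

    cuCtxDestroy(ctx);
    return 0;
}
```

This needs a CUDA-capable GPU and driver to run, so I can’t rule out that mixing it with runtime-API calls like `cudaMalloc` in the same program behaves differently.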