I am asking this here because I find both the Programming Guide and the SDK examples rather unclear on the subject.
When device memory is full, any attempt to allocate memory on the device, such as cudaMalloc, fails with error 4 (CUDA_ERROR_DEINITIALIZED).
What is the actual quota of device memory usable through CUDA? And is there any way to query the amount of available device memory?
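For reference, a minimal sketch of how the failure shows up (the 1 GB request size is arbitrary, chosen only to exceed the 512 MB card; on out-of-memory one would normally expect cudaErrorMemoryAllocation rather than error 4, which is part of what makes this confusing):

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    void *p = NULL;
    // Deliberately oversized request to provoke an allocation failure.
    cudaError_t err = cudaMalloc(&p, 1024u * 1024u * 1024u);
    if (err != cudaSuccess) {
        // Print the numeric code and its description.
        printf("cudaMalloc failed: %d (%s)\n",
               (int)err, cudaGetErrorString(err));
    } else {
        cudaFree(p);
    }
    return 0;
}
```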
Example:
GeForce 9800 GT with 512 MB
CUDA Toolkit & SDK 2.1, driver 6.14.11.8208
under Windows XP SP2, Microsoft Visual C++ 2005
memory explicitly allocated on device ~ 70 MB
I find it hard to believe that the implicitly allocated memory, a 3D desktop application, and a few ordinary applications (nothing fancy) add up to the remainder of the card's 512 MB (roughly 440 MB).
Please share your views on these two questions.
cuMemGetInfo() is usable from the runtime API even though it's a driver API call. Make sure to include cuda.h and link against libcuda (nvcuda.dll on Windows) to get it.
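A minimal sketch of that mix, assuming the CUDA 2.x signature in which cuMemGetInfo takes `unsigned int*` (newer toolkits use `size_t*`), and assuming that any runtime call such as cudaFree(0) is enough to create the context the driver call needs:

```cuda
#include <stdio.h>
#include <cuda.h>          // driver API: cuMemGetInfo
#include <cuda_runtime.h>  // runtime API: cudaFree

int main(void)
{
    // Touch the runtime first so a context exists for the driver-API call.
    cudaFree(0);

    unsigned int freeMem = 0, totalMem = 0;  // CUDA 2.x uses unsigned int here
    CUresult r = cuMemGetInfo(&freeMem, &totalMem);
    if (r != CUDA_SUCCESS) {
        fprintf(stderr, "cuMemGetInfo failed: %d\n", (int)r);
        return 1;
    }
    printf("free: %u MB, total: %u MB\n", freeMem >> 20, totalMem >> 20);
    return 0;
}
```

Comparing `free` before and after your own cudaMalloc calls is a quick way to see how much the desktop and other applications are really holding.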
Thank you very much for your reply. That is very useful.
It turns out that the 3D desktop app actually uses about 100 MB of device memory. Now I can investigate further whether the memory allocation in my app really goes the way I think it does.
In Visual Studio, you should just be able to go to project properties > Linker > Input.
I have
listed in the Additional Dependencies field.
You may also need to set the 'Additional Library Directories' field under the Linker > General tab. I have it set to
.
As a disclaimer, it's usually not a good idea to depend on SDK components in release code, because there is no guarantee they will always work as they currently do.
Also, if you are not using the CUDA VS Wizard, I would suggest it.