I’m new to CUDA, so these may be quite basic questions.
I would like to use the Runtime API for my programming. However, I cannot find a way to determine the current amount of free memory on the GPU. The Driver API seems to have the function I’m looking for, cuMemGetInfo() - but since you can only use either the Driver API or the Runtime API in a program, I don’t think I can call this function at all (or at least that is my understanding).
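For reference, this is roughly what I mean with the Driver API (just a sketch I haven’t tested - the exact signature of cuMemGetInfo() may differ between toolkit versions, and I’m assuming device 0 here):

```c
/* Untested sketch: querying free/total GPU memory via the Driver API.
   Would need to be built against the CUDA toolkit, e.g.:
       nvcc meminfo.c -o meminfo -lcuda                              */
#include <stdio.h>
#include <cuda.h>

int main(void)
{
    CUdevice dev;
    CUcontext ctx;
    size_t free_mem, total_mem;

    cuInit(0);                  /* must be called before any other Driver API call */
    cuDeviceGet(&dev, 0);       /* assuming device 0 */
    cuCtxCreate(&ctx, 0, dev);  /* a current context is needed for memory queries */

    cuMemGetInfo(&free_mem, &total_mem);
    printf("Free: %zu bytes, Total: %zu bytes\n", free_mem, total_mem);

    cuCtxDestroy(ctx);
    return 0;
}
```

But this forces me into the Driver API, which is exactly what I was hoping to avoid.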
So is there a way to get the unused-memory info from the card using the Runtime API?
How is the problem of available GPU memory usually addressed? Is it just assumed that you have enough memory, with allocation errors handled after the fact?
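In other words, is the usual pattern just something like this (again an untested sketch, with a hypothetical 256 MB request)?

```c
/* Untested sketch: "try to allocate, then check the error" with the Runtime API. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    float *d_buf = NULL;
    size_t bytes = 256 * 1024 * 1024;  /* hypothetical 256 MB request */

    cudaError_t err = cudaMalloc((void **)&d_buf, bytes);
    if (err != cudaSuccess) {
        /* deal with the failed allocation afterwards, e.g. fall back
           to a smaller buffer or report the error */
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    /* ... use the buffer ... */

    cudaFree(d_buf);
    return 0;
}
```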
Thanks for any help!