I am using the unified memory option in my projects.
My question is: how can I determine the maximum number of bytes available for managed allocations on my card?
For example, I ran the same program (making many allocations with `cudaMallocManaged`):
on the GTX 970 I got an out-of-memory error after reaching 1.8 GB;
on the GTX 750 Ti I got the same error when 1.1 GB were reached.
I know that I need to take into account the per-thread stacks, the kernel code, and other things the GPU reserves for its own use, but is there a method/tool to get a good approximation?
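The closest I have found so far is `cudaMemGetInfo`, which reports the free and total device memory as seen by the driver. A minimal sketch (I assume this only gives a rough upper bound, since the driver's own reservations already reduce the reported free amount, and managed allocations may hit the limit before `free` reaches zero):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    // Query free and total device memory on the current device.
    cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    printf("free:  %zu MiB\n", freeBytes  >> 20);
    printf("total: %zu MiB\n", totalBytes >> 20);
    return 0;
}
```

But the numbers it returns do not match the points where my program actually fails, so I am not sure it is the right tool for managed memory.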