When I tested on a GeForce 8500 GT (which has 512 MB of global memory),
I could allocate only about 440 MB on the device,
but I assumed some portion of global memory was reserved for graphics processing.
Now I'm working with a Tesla C1060 (4 GB of global memory),
but I still cannot allocate the whole of global memory;
it reports approximately 4032 MB of global memory available.
I can't understand why this happens,
since the Tesla has no display output.
Can someone explain this to me?
Also, how much memory can I allocate in a single call?
I can't say exactly, but I could only allocate 1024 MB of global memory at a time.
Any single allocation larger than 1024 MB fails with a cudaErrorMemoryValueTooLarge return code.
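To illustrate what I'm seeing, here is a minimal sketch (untested on other boards) that first queries how much memory the runtime actually reports via `cudaMemGetInfo`, then tries to grab the free memory in 256 MB chunks rather than one huge `cudaMalloc`; the chunk size is just an arbitrary value I picked for the experiment:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    // Ask the runtime how much device memory it reports as free/total.
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("free: %zu MB, total: %zu MB\n",
           freeBytes >> 20, totalBytes >> 20);

    // Allocate in 256 MB chunks (arbitrary size for this experiment)
    // to see how much is really allocatable in total, as opposed to
    // the limit on a single allocation.
    const size_t chunk = 256u << 20;
    size_t allocated = 0;
    void* p = nullptr;
    while (cudaMalloc(&p, chunk) == cudaSuccess) {
        allocated += chunk;  // deliberately never freed in this test
    }
    printf("allocated in chunks: %zu MB\n", allocated >> 20);
    return 0;
}
```

On my C1060 the chunked total comes out well below the 4032 MB reported, which is the gap I'm asking about.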