A quick calculation for GPU RAM


I have a quick calculation to ask about. If I have a matrix H(2000,2000) stored in the GPU's global memory, does that mean:

  • 2000 × 2000 = 4,000,000 cells that are fully populated,
    and if they are double precision, therefore
    4.0 × 10^6 × 8 bytes?
    If so, it equals 4 × 10^6 × 8 = 32,000,000 bytes.

Is that way of calculating it right?

I ask this because I have 1 GB of RAM on my GPU, and my device subroutines crash the GPU when I assign them an array that is H(3000,3000) in double precision. Could it be a memory-allocation issue?
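For what it's worth, the arithmetic above can be checked with a tiny standalone Fortran program (the program and variable names here are purely illustrative; a double-precision value occupies 8 bytes):

```fortran
! Illustrative sketch: bytes needed by dense double-precision matrices.
program mem_check
  implicit none
  integer(kind=8) :: bytes_2000, bytes_3000
  bytes_2000 = 2000_8 * 2000_8 * 8_8   ! 32,000,000 bytes, about 30.5 MB
  bytes_3000 = 3000_8 * 3000_8 * 8_8   ! 72,000,000 bytes, about 68.7 MB
  print *, 'H(2000,2000):', bytes_2000, 'bytes'
  print *, 'H(3000,3000):', bytes_3000, 'bytes'
end program mem_check
```

Note that even H(3000,3000) is only about 69 MB, well under 1 GB, so raw capacity alone would not obviously explain a crash (and indeed the follow-up below reports a different culprit was found).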

Thank you for your interest.



I actually found the culprit for the previous problem. I am now interested in another problem that has arisen.

I get the error: LINK : fatal error LNK1248: image size (…) exceeds maximum allowable size (80000000)

Is there a way I can allow for larger arrays and exceed that limit? I tried the -Mlarge_arrays flag and it made no difference. I am in dire need of larger sizes; could you please guide me on what to do?


Hi Ahmed,

Are these static arrays? If so, you need to use “-mcmodel=medium” when the static size of the data is >2GB.

Note that Windows does not support the medium memory model, so if you’re using Windows, change your arrays to be allocatable and compile with “-Mlarge_arrays”.
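The change from a static to an allocatable array can be sketched like this (a minimal illustration, assuming a hypothetical program name and array shape; the flag names come from this thread):

```fortran
! Illustrative sketch: replacing a large static array
!   double precision :: H(3000,3000)
! with an allocatable one, so it can be used with -Mlarge_arrays.
program big_array
  implicit none
  double precision, allocatable :: H(:,:)
  integer :: istat
  allocate(H(3000,3000), stat=istat)   ! allocated at run time, not in the image
  if (istat /= 0) stop 'allocation failed'
  H = 0.0d0
  ! ... use H ...
  deallocate(H)
end program big_array
```

Because the array is allocated at run time rather than compiled into the executable image, it no longer counts toward the static image-size limit the linker is complaining about.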

Hope this helps,