Memory allocation on an NVIDIA Tesla K20 GPU

Hello!

I have a Fortran code that has been GPU-accelerated using the PGI Fortran accelerator. The code performs a number of calculations in an iterative method.
At the beginning of the code, the arrays we use are allocated on the GPU, and in each iteration some new arrays are allocated and deallocated continuously. The maximum RAM of this GPU is 6 GB.
My question is: Is there a way to see how much memory we are allocating on the GPU? What is of interest is only the maximum memory used during the iterations, not how the memory requirements of the code fluctuate.

Thank you very much for your time!

Hi Iliasp,

We do have an undocumented runtime routine called “acc_bytesalloc” which returns the total number of bytes allocated on the device. However, it doesn’t show the maximum allocated at any given time. To use the routine, either include “accel.h” in C or “use accel_lib” in Fortran.
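A call at the point of interest might look like the sketch below. This is only an illustration: since the routine is undocumented, the exact signature assumed here is a guess, and the array “a” and its size are made up for the example.

```fortran
! Sketch only: acc_bytesalloc is undocumented, so the signature assumed
! here is a guess; the array "a" and its size are illustrative.
program probe_alloc
  use accel_lib          ! PGI accelerator runtime routines
  implicit none
  real, allocatable :: a(:)

  allocate(a(1000000))
  !$acc data copyin(a)
  ! Ask the runtime how many bytes are currently allocated on the device
  print *, 'bytes allocated on device:', acc_bytesalloc()
  !$acc end data
end program probe_alloc
```

Calling it at the start and end of each iteration would show how the total grows, though you would still have to track the maximum yourself.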

Another method is to watch the output of the “nvidia-smi -a” utility. It will show the current amount of used memory.
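Since nvidia-smi only reports the current usage, one way to capture the peak is to poll it while the code runs. A minimal sketch, assuming a driver whose nvidia-smi supports the --query-gpu option (the 60-sample duration is arbitrary):

```shell
#!/bin/sh
# Sample GPU memory usage once per second for 60 seconds while the
# accelerated code runs, then report the largest value observed.
for i in $(seq 1 60); do
  nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits
  sleep 1
done | awk '{ if ($1 + 0 > max) max = $1 + 0 } END { print "peak MiB: " max }'
```

Run it in a second terminal (or background it) before launching the executable; the awk filter just tracks the running maximum of the sampled values.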

Hope this helps,
Mat

Hi Mat,

Thank you for your answer. I tried the method with “nvidia-smi -a” and it seems to work fine. It gives me a good idea of how much memory the code allocates on each GPU, so it does the job.
Thank you very much for your time!

Best regards,
Ilias