Remove data from the GPU

Hello,

I have noticed that when I run two successive, independent computations of my program on the GPU, the second computation picks up where the results of the first one left off. Among other things, this means that sometimes the next computation cannot run at all because the GPU is “Out of memory”.

I have done three things to try to avoid this (sketched below), but they do not seem to be enough:

  1. I initialize all the variables of the problem on the CPU after allocating them.
  2. After that I do an “acc data copyin” or “acc data create” for the variables that are needed on the GPU.
  3. At the end of the program I do the corresponding “acc end data”, and I have also added an “acc exit data delete” for those variables.
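
In outline, the structure looks roughly like this (a simplified Fortran sketch with placeholder array names and sizes, not my real program):

    ! Simplified sketch of steps 1-3 above; array names and sizes
    ! are placeholders, not the real code.
    program data_sketch
      implicit none
      integer, parameter :: n = 1000000
      real(8), allocatable :: a(:), c(:)
      integer :: i

      ! (1) allocate and initialize on the CPU
      allocate(a(n), c(n))
      a = 1.0d0

      ! (2) place the data on the GPU
      !$acc data copyin(a) create(c)

      !$acc parallel loop present(a, c)
      do i = 1, n
         c(i) = 2.0d0 * a(i)
      end do

      ! (3) close the data region; I have also tried adding
      !     "!$acc exit data delete(a, c)" right after it
      !$acc end data

      deallocate(a, c)
    end program data_sketch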

What is the best way to completely remove the data from the GPU so that it is clean for the next computation?

Thank you,

Martí

Hi Marti,

If the problem were with the device allocation of the variables, it should occur at the point where you create the data (i.e., at the enter data directive), not when you enter the second compute region.

Do you use “private”? Privatized variables are allocated in global memory, with each thread getting its own copy. It could be that you’re simply using too much memory for these private variables.
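
For example (a made-up illustration, not your code), something like the following privatizes a large array, so the device has to hold one copy of tmp for every parallel instance of the loop, all of it taken from global memory:

    ! Hypothetical example (not your code): each parallel instance of
    ! the loop gets its own copy of tmp, and those copies live in
    ! device global memory, so memory use grows with the amount of
    ! parallelism times the size of tmp.
    program private_sketch
      implicit none
      integer, parameter :: n = 1000, m = 100000
      real(8) :: tmp(m), total(n)
      integer :: i, j

      !$acc parallel loop private(tmp)
      do i = 1, n
         do j = 1, m
            tmp(j) = dble(i + j)
         end do
         total(i) = tmp(m)
      end do

      print *, total(1), total(n)
    end program private_sketch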

  • Mat

Hi Mat,

Thank you for the comments. I don’t use private.

Is there any way to clean up the GPU memory at the beginning or at the end of the program? I’ve noticed that when I turn my computer off and on again, the results are correct once more. I assume that doing this clears the GPU…

Thanks,

Martí

Hi Marti,

Once your program exits and the GPU context is destroyed, there shouldn’t be any memory allocated on the device.

The behavior you’re describing seems more likely to be a case where you have an uninitialized device variable that happens to be zero after a reboot. The more that is run on the GPU, however, the more likely it is that this memory contains garbage values.
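
As a made-up sketch of what I mean (not your code), something like the following reads an array that was allocated with “create” but never written on the device:

    ! Hypothetical sketch: "create" allocates c on the device but
    ! nothing ever writes it there, so the sum reads leftover device
    ! memory (possibly zero after a reboot, garbage later on).
    program uninit_sketch
      implicit none
      integer, parameter :: n = 1000
      real(8) :: c(n), s
      integer :: i

      s = 0.0d0
      !$acc parallel loop create(c) reduction(+:s)
      do i = 1, n
         s = s + c(i)   ! bug: c was never initialized on the device
      end do

      print *, 's =', s
    end program uninit_sketch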

Can you run the CUDA “cuda-memcheck” utility on your binary? It might help give clues as to what’s wrong.
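
For example, if your executable were called ./myprogram (substitute the real name):

    cuda-memcheck ./myprogram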

  • Mat