Should tesla:managed ignore data clauses?

Hi,

I have an OpenACC code that I want to run on a large problem that does not fit in my current GPU.

I am trying to do this by compiling with -ta=tesla:managed so that the code uses CUDA Unified Memory (UVM), which I am told will page data between host and device as necessary.

My code has unstructured data clauses and "present" clauses in it.

Does the managed flag make the compiler ignore my data management (the data clauses) so that it will use UVM, or would I have to change the code to remove the data clauses?

Does the managed flag make the compiler ignore my data management (the data clauses) so that it will use UVM, or would I have to change the code to remove the data clauses?

Yes, when using CUDA Unified Memory the data clauses are ignored for dynamic data. They are still needed for static data, though.

There's no need to remove them, since you may need them again later if you run on a device without managed memory.

  • Mat

Thanks,

I tried running the large run and got:

malloc: cuMemMallocManaged returns error code 2
0: ALLOCATE: 1635454656 bytes requested; not enough memory

My understanding was that as long as the CPU has enough RAM, managed memory should always work, even if the GPU has less RAM.

Is this not correct?

If my GPU has 4GB RAM and my CPU has 32GB, what is the most I can allocate using managed memory?

Is this not correct?

It depends on which CUDA driver you have and how you're compiling the code. Paging support was added in CUDA 8.0, so make sure you have the most current CUDA driver installed. Also, compile with "-ta=tesla:managed,cuda8.0".

-Mat
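Spelled out as commands (assuming the PGI C compiler and a hypothetical source file name), that suggestion looks like:

```shell
# Build with CUDA Unified Memory, targeting the CUDA 8.0 toolkit,
# which added demand paging for managed allocations.
pgcc -acc -ta=tesla:managed,cuda8.0 -Minfo=accel app.c -o app

# Check the installed driver version; paging also requires a
# sufficiently recent driver.
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```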

My driver is:
Driver Version: 367.57

I tried the cuda8.0 flag and I still get:
malloc: cuMemMallocManaged returns error code 2
0: ALLOCATE: 1635454656 bytes requested; not enough memory

I also noticed this came up:

free: cuMemFree returns error code 1
free: cuMemFree returns error code 1

Is there a compute capability (CC) I need?
I am using a GPU with:

CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 5.2
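For what it's worth, the managed-memory capabilities of a device can be queried directly through the CUDA runtime. This is an illustrative sketch, not code from the thread; it needs nvcc and an attached GPU to run:

```c
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    printf("compute capability:      %d.%d\n", prop.major, prop.minor);
    printf("managedMemory supported: %d\n", prop.managedMemory);

    /* concurrentManagedAccess (queryable since CUDA 8.0) reports
       whether the device supports concurrent access to managed
       memory, which goes along with demand paging support. */
    int concurrent = 0;
    cudaDeviceGetAttribute(&concurrent,
                           cudaDevAttrConcurrentManagedAccess, 0);
    printf("concurrentManagedAccess: %d\n", concurrent);
    return 0;
}
```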