persistent memory

Is there a way to keep CUDA memory persistent?

In this snippet the global deviceData pointer does not retain its contents across subsequent calls to cuProcess.

int* deviceData;
bool firstStart = true;

extern "C" void cuProcess(int* someInitData, int* someOtherData, int size)
{
    // on the first call only, allocate device memory and copy the init data over
    if (firstStart)
    {
        firstStart = false;
        cudaMalloc((void**)&deviceData, size * sizeof(int));
        cudaMemcpyAsync(deviceData, someInitData, size * sizeof(int),
                        cudaMemcpyHostToDevice);
    }

    // do some CUDA stuff
}

It might, if you declare firstStart and deviceData as static…

CUDA global memory allocations have the lifetime of the context in which they are created. If you don’t destroy the context, the memory remains allocated.
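As a sketch of that, assuming a single host thread and the runtime API (which keeps one primary context per device alive for the life of the process), an allocation made on the first call can be reused on later ones. The function and kernel names here are placeholders, not the original poster's code:

```cuda
#include <cuda_runtime.h>
#include <cstddef>

// Hypothetical kernel for illustration: doubles each element in place.
__global__ void kDouble(int* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2;
}

extern "C" void cuProcessDemo(const int* someInitData, int size)
{
    static int* deviceData = NULL;   // persists between calls

    if (deviceData == NULL)          // first call only
    {
        cudaMalloc((void**)&deviceData, size * sizeof(int));
        cudaMemcpy(deviceData, someInitData, size * sizeof(int),
                   cudaMemcpyHostToDevice);
    }

    // The allocation from the first call is still valid here, because
    // it has the lifetime of the context, not of this function.
    kDouble<<<(size + 255) / 256, 256>>>(deviceData, size);
    cudaDeviceSynchronize();         // also surfaces any launch errors
}
```

If a second host thread (or another library creating its own context) is involved, the pointer is not valid there, which is one common way this pattern appears to "lose" memory.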

Thanks for the quick response. I have tried declaring them as static, but the problem remains. I would have expected the memory to remain allocated until cudaFree is called, but it does not. I can confirm that the initialization branch runs only once, and that the contents of deviceData are intact and correct - they're just not visible from within the kernel after the first call.

In the snippet they appear as global variables, which should be fine. No need for static here, unless I'm missing a point…

It should be.

Can you post a full runnable test case that shows it's not?

After much playing about I got the issue resolved. Even though you would expect the memory to be persistent - and it is within the context of the .cu code - the kernel fails on the second call (sometimes it goes a little longer, but it always fails eventually). Checking the contents of the memory just before launching the kernel confirms the data is still there, but the kernel fails to see it.

My resolution was to allocate the host memory with cudaHostAlloc using the cudaHostAllocMapped | cudaHostAllocPortable flags, and to retrieve a device pointer for it through cudaHostGetDevicePointer.
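For reference, that workaround looks roughly like this. This is a sketch, not the poster's actual code: the function and variable names are invented, and on older toolkits cudaSetDeviceFlags(cudaDeviceMapHost) must be called before the context is created for mapping to work:

```cuda
#include <cuda_runtime.h>
#include <cstddef>

int* hostData   = NULL;  // pinned, mapped host allocation
int* deviceView = NULL;  // device-side alias of the same memory

void allocMapped(int size)
{
    // Mapped + portable pinned memory: directly addressable from
    // kernels, and usable from any CUDA context, not just the
    // context that allocated it.
    cudaHostAlloc((void**)&hostData, size * sizeof(int),
                  cudaHostAllocMapped | cudaHostAllocPortable);

    // Kernels must be given this device pointer, not hostData itself.
    cudaHostGetDevicePointer((void**)&deviceView, hostData, 0);
}
```

Because the memory is portable across contexts, this sidesteps the symptom in the thread, at the cost of kernel reads going over the PCIe bus instead of device memory.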