Is memory used by CUDA protected?


Just a short question: when I am using the GPU for a CUDA app, and at the same time as my card for the desktop, running OpenGL or whatever, how is the memory my CUDA app copies from host to device protected against other applications using the GPU?

Thank you very much!


From my understanding, the GPU is not “protected”, or in other words it is not dedicated to your application alone. The same goes for memory: any free memory can be claimed by any application (keep in mind, free memory, not memory that is already allocated).

Just try running a bunch of the CUDA sample programs; obviously they can all run at the same time on a single GPU.

Sorry if this doesn’t answer your question.

OK, I don’t know :P

When I use cudaMalloc, is the allocated memory space “protected” against other applications using the GPU memory?

In other words, can my CUDA application get corrupted because something else is writing to my memory?


Sorry for the bad English, and thanks again.

It is somewhat unclear how the memory protection works. Statements from NVIDIA employees in the past (like tmurray) have suggested that the device does use virtual memory addressing and can protect against corruption between processes. However, people have seen bad writes corrupt the display, crash the computer, etc. I believe these were due to driver bugs in dealing with crashed GPU kernels rather than a fundamental architecture problem, but I don’t know. Someone should build a torture test to see how things work with the latest driver.
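A minimal version of such a torture test might be to run several copies of the same program at once, each writing its own pattern and checking later that nothing else overwrote it. This is only a sketch (the buffer size, the sleep duration, and the use of the process ID as the pattern are my own choices, not anything from NVIDIA):

```cuda
// torture.cu -- launch several instances of this program at the same time.
// Each process fills its own allocation with a process-specific pattern,
// waits while the other instances run, then checks for corruption.
#include <cstdio>
#include <cstdlib>
#include <unistd.h>
#include <cuda_runtime.h>

__global__ void fill(unsigned int *buf, size_t n, unsigned int pattern)
{
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        buf[i] = pattern;
}

int main(void)
{
    const size_t n = 1 << 24;                    // 64 MB of unsigned ints
    unsigned int pattern = (unsigned int)getpid();

    unsigned int *d_buf;
    if (cudaMalloc(&d_buf, n * sizeof(unsigned int)) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }
    fill<<<(unsigned)((n + 255) / 256), 256>>>(d_buf, n, pattern);
    cudaThreadSynchronize();

    sleep(10);   // give the other instances time to run their kernels

    unsigned int *h_buf = (unsigned int *)malloc(n * sizeof(unsigned int));
    cudaMemcpy(h_buf, d_buf, n * sizeof(unsigned int),
               cudaMemcpyDeviceToHost);
    for (size_t i = 0; i < n; i++) {
        if (h_buf[i] != pattern) {
            printf("corruption at word %zu: got %08x, expected %08x\n",
                   i, h_buf[i], pattern);
            return 1;
        }
    }
    printf("no corruption seen by pid %u\n", pattern);
    return 0;
}
```

If the driver’s per-context protection works, every instance should print “no corruption seen” no matter how many run concurrently; a single mismatch would be evidence of cross-context corruption.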

I think what happens to GPU memory between sessions is fairly well known.

Just look at this application:

Inside there is a memory scrubber, which populates the memory with some data. But if you cudaMalloc() the memory from another application (or another process) and don’t initialize it, you can easily see that the memory is still filled with the previous session’s data ===> correct me if I’m wrong… I may be wrong.

I’m just guessing from the way it is written. Thanks to whoever in this forum pointed me to this application :-).
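The observation above is easy to reproduce: allocate device memory without initializing it and dump the first few words. This is a sketch of that experiment (nothing here is from the application mentioned; whether you actually see leftover data depends on the driver and what ran before):

```cuda
// leftover.cu -- allocate device memory WITHOUT initializing it and dump
// the first few words. If another CUDA process ran earlier, you may see
// its stale data here instead of zeros.
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    const size_t n = 1 << 20;
    unsigned int *d_buf;
    if (cudaMalloc(&d_buf, n * sizeof(unsigned int)) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }
    // Deliberately no cudaMemset: we want whatever was left behind.

    unsigned int h_buf[8];
    cudaMemcpy(h_buf, d_buf, sizeof(h_buf), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 8; i++)
        printf("%08x ", h_buf[i]);   // often non-zero leftovers
    printf("\n");

    cudaFree(d_buf);
    return 0;
}
```

Note that seeing stale data this way only shows that memory is not scrubbed between allocations; it says nothing about whether a *live* context’s memory can be written by another context.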

That’s a related, but slightly different issue. You can see the contents of memory left behind by a prior process (which is a problem this wrapper solves), but I think the question was whether two active contexts can corrupt each other’s memory.

According to:

it seems that Windows WDDM (which comes with a GPU scheduler running in the CPU’s kernel) does have something like a GPU context interrupt. But one GPU can only run one thread (and thus one context) at a time:

though they can be interrupted at any time:

And the specifics of context switching for NVIDIA GPUs are here:

Coupled with the ability of processes to see each other’s leftover GPU memory contents… all of this seems to point, remotely, to “YES”… possibly they can interrupt each other’s GPU processing and then touch the data? I may be totally WRONG :-(.