Device memory separation with CUDA


I am interested in how CUDA separates device memory between operating system processes.
If I run my code on the device, can it access:
a) the primary surface?
b) device memory allocated by another process of the operating system?

My second question is: what privileges does my code need to run on the device? Can a guest account (in Windows) initialize a CUDA device, write to its memory, and launch threads on it? Are there any editable ACLs that determine the access restrictions to the device?
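To make the second question concrete, here is a minimal probe I have in mind (just a sketch, assuming the CUDA toolkit is installed): run it under the account in question and see which of the three steps succeed, i.e. device initialization, memory allocation, and kernel launch.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel that writes to device memory.
__global__ void touch(int *p) { *p = 42; }

int main() {
    int *d = nullptr;

    // cudaMalloc implicitly initializes the CUDA context for this process,
    // so a failure here would indicate the account cannot use the device.
    cudaError_t err = cudaMalloc(&d, sizeof(int));
    if (err != cudaSuccess) {
        printf("init/alloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // Try to actually run a thread on the device and write its memory.
    touch<<<1, 1>>>(d);
    err = cudaDeviceSynchronize();
    printf("kernel launch: %s\n",
           err == cudaSuccess ? "ok" : cudaGetErrorString(err));

    cudaFree(d);
    return 0;
}
```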