Is Memory address space per context?

I’m a little confused about context memory in “Cuda_C_programming_guide”. It says:
" A CUDA context is analogous to a CPU process. All resources and actions performed within
the driver API are encapsulated inside a CUDA context, and the system automatically cleans
up these resources when the context is destroyed. Besides objects such as modules and
texture or surface references, each context has its own distinct address space. As a result,
CUdeviceptr values from different contexts reference different memory locations."
Does that mean one context can’t access a CUdeviceptr that was allocated by another context? Is the memory address space per context?

Thanks in advance!


Yes, that is the case, as far as I know.

Different contexts may recycle the same range of memory addresses, but the mapping to hardware memory is unique to each context. So the same address in two address spaces will point to different things, or may not be mapped at all in the other context.
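A minimal driver API sketch of the situation being described: allocate in one context, then try to use that CUdeviceptr while a second context is current. Error handling is abbreviated, device 0 is assumed to exist, and note that on modern 64-bit platforms with unified virtual addressing the allocations of all contexts on a device share one address space, so the cross-context copy may actually succeed there; the failure described in the quoted (older) guide text applies to pre-UVA systems.

```cuda
// Sketch: a CUdeviceptr allocated in ctxA used while ctxB is current.
// Assumes device 0 exists; error handling abbreviated.
#include <cuda.h>
#include <stdio.h>

#define CHECK(call) do { CUresult r_ = (call); \
    if (r_ != CUDA_SUCCESS) printf("%s failed: %d\n", #call, (int)r_); } while (0)

int main(void) {
    CUdevice dev;
    CUcontext ctxA, ctxB;
    CUdeviceptr ptr;
    int host = 0;

    CHECK(cuInit(0));
    CHECK(cuDeviceGet(&dev, 0));

    CHECK(cuCtxCreate(&ctxA, 0, dev));      // ctxA becomes current
    CHECK(cuMemAlloc(&ptr, sizeof(int)));   // allocation belongs to ctxA

    CHECK(cuCtxCreate(&ctxB, 0, dev));      // ctxB becomes current
    // Using ctxA's pointer with ctxB current: on a pre-UVA system this
    // is expected to fail, because ptr has no meaning in ctxB's address
    // space; with unified virtual addressing it may succeed.
    CUresult r = cuMemcpyDtoH(&host, ptr, sizeof(int));
    printf("copy with other context current: %s\n",
           r == CUDA_SUCCESS ? "ok" : "error");

    CHECK(cuCtxSetCurrent(ctxA));           // free in the owning context
    CHECK(cuMemFree(ptr));
    CHECK(cuCtxDestroy(ctxB));
    CHECK(cuCtxDestroy(ctxA));
    return 0;
}
```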

Thanks, cbuchner1!