I’m a little confused about context memory in the CUDA C Programming Guide. It says:
" A CUDA context is analogous to a CPU process. All resources and actions performed within
the driver API are encapsulated inside a CUDA context, and the system automatically cleans
up these resources when the context is destroyed. Besides objects such as modules and
texture or surface references, each context has its own distinct address space. As a result,
CUdeviceptr values from different contexts reference different memory locations."
Does that mean one context can’t access a CUdeviceptr that was allocated by another context? Is the memory address space per context?
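To make the question concrete, here is a minimal driver-API sketch of the scenario I mean (illustrative only, error checking omitted; it needs a GPU and the CUDA driver to actually run):

```c
#include <cuda.h>

int main(void) {
    CUdevice  dev;
    CUcontext ctxA, ctxB;
    CUdeviceptr ptr;

    cuInit(0);
    cuDeviceGet(&dev, 0);

    // Create context A and allocate memory while it is current.
    cuCtxCreate(&ctxA, 0, dev);
    cuMemAlloc(&ptr, 1024);

    // Create a second context on the same device; ctxB becomes current.
    cuCtxCreate(&ctxB, 0, dev);

    // Is using ptr here invalid, because ptr belongs to
    // ctxA's address space rather than ctxB's?
    cuMemsetD8(ptr, 0, 1024);

    cuCtxDestroy(ctxB);
    cuCtxDestroy(ctxA);
    return 0;
}
```

Is the `cuMemsetD8` call above an error, or does it work because both contexts are on the same device?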
Thanks in advance!