Access unified memory from a different process

Windows 10, CUDA 11.7, compute capability 6.1

What if I call cudaMallocManaged in process A, then send the address to process B (via, say, Boost.Interprocess)? Will B be able to access the unified memory allocated by A?
(B is an ordinary Windows process, compiled with MSVC and not aware of CUDA.)

Thank you for your replies.

I’m not aware of any limitations here. On Windows, make sure that you have done a cudaDeviceSynchronize() after any kernel call(s) before attempting to access unified memory from the host.
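Something like this (a minimal sketch; the kernel and sizes are just illustrative, not from your code):

```cpp
// Minimal sketch of the synchronize-before-host-access pattern on Windows.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 256;
    int *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;   // host access before the launch is fine

    increment<<<(n + 127) / 128, 128>>>(data, n);
    cudaDeviceSynchronize();                   // required before the host touches `data` again

    printf("data[0] = %d\n", data[0]);         // safe: the kernel has finished
    cudaFree(data);
    return 0;
}
```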

Thank you for your reply.

I tried the situation described in my original post. Process A calls cudaMallocManaged (no kernel touches the allocation; the host just writes a number into the first few bytes) and sends the address to process B via a Boost.Interprocess message_queue. I print the address on both sides to make sure it really arrives through the queue unaltered. When B tries to dereference the pointer to read the number, it hangs with no error message.
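Roughly what process A does (a simplified sketch, since I didn't post the exact code; the queue name and sizes are placeholders):

```cpp
// Rough reconstruction of the attempt described above (process A side).
// Note that only the raw 8-byte pointer value travels through the queue.
#include <cuda_runtime.h>
#include <boost/interprocess/ipc/message_queue.hpp>
#include <iostream>

int main() {
    int *p = nullptr;
    cudaMallocManaged(&p, 4096);
    p[0] = 42;                                   // write a number into the first bytes

    using namespace boost::interprocess;
    message_queue::remove("umem_queue");
    message_queue mq(create_only, "umem_queue", 1, sizeof(p));

    std::cout << "A sends address " << p << std::endl;
    mq.send(&p, sizeof(p), 0);                   // sends the pointer value only

    std::cin.get();                              // keep the process (and allocation) alive
    cudaFree(p);
    return 0;
}
```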

Sorry, I miscommunicated. You cannot send an address from process A to process B and expect to use it in process B. You can’t do that with an ordinary allocation and you can’t do it with a managed allocation.

As far as I know there are no restrictions from using managed memory with a legitimate IPC mechanism. What you have described is not a legitimate IPC mechanism.

Thank you.
By “legitimate IPC mechanism” I think you mean CUDA IPC, true?
In that case, both processes must use CUDA IPC.
What I would like to do is access the allocated managed memory from a non-CUDA process. Is there no way of doing that?

CUDA IPC is for sharing of device memory allocations.
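For example, something like this (error checking omitted; this works for cudaMalloc allocations, not managed memory, and both sides must be CUDA processes — check the CUDA documentation for platform support of the cudaIpc* APIs on your version):

```cpp
// Sketch of the cudaIpc* runtime API for a plain cudaMalloc allocation.
// The file is just a stand-in for any host IPC mechanism used to ship the handle.
#include <cuda_runtime.h>
#include <cstdio>

// ---- Process A: allocate and export a handle ----
void exporter() {
    float *d_buf = nullptr;
    cudaMalloc(&d_buf, 1024 * sizeof(float));          // not cudaMallocManaged

    cudaIpcMemHandle_t handle;
    cudaIpcGetMemHandle(&handle, d_buf);               // opaque, process-portable handle

    FILE *f = fopen("ipc_handle.bin", "wb");           // ship the handle bytes to B
    fwrite(&handle, sizeof(handle), 1, f);
    fclose(f);
    // ... keep the process (and the allocation) alive while B uses it ...
}

// ---- Process B: import the handle, get a device pointer valid in this process ----
void importer() {
    cudaIpcMemHandle_t handle;
    FILE *f = fopen("ipc_handle.bin", "rb");
    fread(&handle, sizeof(handle), 1, f);
    fclose(f);

    float *d_buf = nullptr;
    cudaIpcOpenMemHandle((void **)&d_buf, handle, cudaIpcMemLazyEnablePeerAccess);

    // d_buf can now be passed to kernels launched by process B.
    cudaIpcCloseMemHandle(d_buf);
}

int main(int argc, char **argv) {
    if (argc > 1 && argv[1][0] == 'B') importer(); else exporter();
    return 0;
}
```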

The IPC you are talking about here is host IPC. A legitimate host IPC mechanism is one that establishes inter-process-communication between two host processes. Simply passing a pointer obtained in one process to another process is not how IPC works.

And yes, now that I think about it, it might be difficult to share the managed memory via host IPC.
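One pattern you could try (just a sketch of what a legitimate host IPC mechanism looks like, not something I have tested here) is to copy the results out of the managed buffer, after cudaDeviceSynchronize(), into an ordinary named shared-memory segment that the non-CUDA process maps. The segment name and size below are made up:

```cpp
// Sketch of legitimate host IPC: the data itself (not a CUDA pointer) is copied
// into a named shared-memory segment that process B can open and map.
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>

int main() {
    using namespace boost::interprocess;

    shared_memory_object::remove("umem_results");
    shared_memory_object shm(create_only, "umem_results", read_write);
    shm.truncate(4096);

    mapped_region region(shm, read_write);

    // After cudaDeviceSynchronize(), copy results out of the managed buffer
    // into the shared segment; B maps "umem_results" and reads ordinary memory.
    int results[4] = {1, 2, 3, 4};                   // stand-in for data produced on the GPU
    std::memcpy(region.get_address(), results, sizeof(results));

    // ... signal B, wait, then shared_memory_object::remove("umem_results") ...
    return 0;
}
```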