Sharing CUDA memory between processes

As stated in the CUDA docs, the IPC functions do not work on Tegra platforms, which of course includes Jetson devices.

Is there any other way to create a CUDA memory buffer that is shared between two separate processes?
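
For reference, this is the kind of IPC export/import I mean — it works on discrete GPUs but is documented as unsupported on Tegra. A minimal sketch (producer side only, error handling mostly omitted):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    void *devPtr = nullptr;
    cudaMalloc(&devPtr, 1 << 20);                 // 1 MiB device buffer

    // Export an IPC handle for the allocation.
    // On Tegra/Jetson this is the step that is not supported.
    cudaIpcMemHandle_t handle;
    cudaError_t err = cudaIpcGetMemHandle(&handle, devPtr);
    if (err != cudaSuccess) {
        std::printf("cudaIpcGetMemHandle failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // Normally the handle would be sent to the consumer process (e.g. over a
    // socket), which maps it with
    // cudaIpcOpenMemHandle(&ptr, handle, cudaIpcMemLazyEnablePeerAccess).
    cudaFree(devPtr);
    return 0;
}
```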

Hi,

Could you share more detail about your use case?

Do you want to share the GPU buffer while both processes are still alive?
Or is it possible that one of the processes will terminate before the other accesses the buffer?

Thanks.

Yes, both processes stay alive. The use case is that one process is a “producer” and the other is a “consumer”: the first process fills the shared CUDA buffer and signals the other process that the buffer is ready, and then the second process reads it.

In other words, it is purely a zero-copy problem: the two processes need to exchange large amounts of data, and copying it over conventional IPC mechanisms is rather expensive.
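
One direction I am considering (not confirmed as the recommended approach): since Tegra has a unified memory architecture, both processes could map the same POSIX shared-memory region and register it with CUDA, so kernels in either process can read and write it without an extra copy. A minimal sketch, assuming cudaHostRegister() is supported on the Jetson/CUDA version in use; the shared-memory name /cuda_shared_buf is hypothetical, and the producer-to-consumer ready signal (semaphore, socket, etc.) is omitted:

```cpp
#include <cuda_runtime.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

static const char  *kShmName = "/cuda_shared_buf";   // hypothetical name agreed on by both processes
static const size_t kSize    = 1 << 20;              // 1 MiB, a multiple of the page size

int main() {
    // Both producer and consumer open the same POSIX shared-memory object.
    int fd = shm_open(kShmName, O_CREAT | O_RDWR, 0666);
    if (fd < 0) { std::perror("shm_open"); return 1; }
    if (ftruncate(fd, kSize) != 0) { std::perror("ftruncate"); return 1; }

    void *hostPtr = mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (hostPtr == MAP_FAILED) { std::perror("mmap"); return 1; }

    // Pin and map the shared pages so the GPU can access them directly.
    cudaError_t err = cudaHostRegister(hostPtr, kSize, cudaHostRegisterMapped);
    if (err != cudaSuccess) {
        std::printf("cudaHostRegister failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    void *devPtr = nullptr;
    cudaHostGetDevicePointer(&devPtr, hostPtr, 0);

    // devPtr can now be passed to kernels in this process; the other process
    // performs the same mapping on its side.  Signaling "buffer ready" is an
    // ordinary IPC step (e.g. a POSIX semaphore) and is not shown here.

    cudaHostUnregister(hostPtr);
    munmap(hostPtr, kSize);
    close(fd);
    return 0;
}
```

Compile with nvcc; on older glibc versions, shm_open may additionally require linking with -lrt. Would something along these lines be a reasonable option on Jetson, or is there a better-supported mechanism?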