Share CUDA memory between different system processes

Hi!
I have some PyTorch tensors in one script and want to share them with other scripts.
How can I do that?

I have tensors in CUDA memory (in backbone.py), and I think the best solution would be to get something like an address in CUDA memory that I could use (in head.py) to access these tensors.

Any ideas?
P.S. This looks like a task for Triton Inference Server.

Hi @kuskov.stanislav
This question might be better suited for the CUDA Programming and Performance category on the NVIDIA Developer Forums. I have moved it there.


The CUDA IPC mechanism allows device memory to be shared between processes. There are CUDA sample codes that demonstrate it. I won't be able to give you a roadmap for whatever you are trying to do in PyTorch. However, a simple Google search for "pytorch cuda ipc" turned up articles like this which may be of interest.
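
As a minimal sketch (not a drop-in solution), here is how sharing a single CUDA tensor between two processes can look in PyTorch using torch.multiprocessing, which exchanges CUDA tensors via CUDA IPC handles under the hood. The names consumer, queue, and the tensor shape are purely illustrative:

```python
import torch
import torch.multiprocessing as mp


def consumer(queue):
    # The received tensor is backed by the same device memory as the
    # sender's tensor; no copy is made across processes.
    t = queue.get()
    t += 1  # in-place change, visible to the sender because the memory is shared


if __name__ == "__main__":
    # CUDA with multiprocessing requires the 'spawn' start method.
    mp.set_start_method("spawn")
    queue = mp.Queue()
    p = mp.Process(target=consumer, args=(queue,))
    p.start()

    t = torch.zeros(4, device="cuda")
    queue.put(t)  # sender must keep `t` alive while the receiver uses it
    p.join()
    print(t)  # tensor([1., 1., 1., 1.], device='cuda:0')
```

Note that this pattern assumes the processes are spawned from a common parent. For two scripts started independently (as backbone.py and head.py would be), you would need the lower-level CUDA IPC handle exchange shown in the CUDA samples (cudaIpcGetMemHandle / cudaIpcOpenMemHandle), passing the handle between processes yourself.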


Thank you for this link! It looks a lot like my task. I will try it and write back about the results.