I’m currently working on a project using MIG (Multi-Instance GPU). To achieve my goal, I need multiple CIs (Compute Instances) to access the same CUDA tensor. For example, if process 1 creates a CUDA tensor, then process 2 should be able to access it.
As far as I know, this should be possible since CIs (Compute Instances) share the memory of their parent GI (GPU Instance).
I already know this can be done with torch.multiprocessing.Queue, since it passes the CUDA IPC handle to the other process rather than the data itself. Is there any other way to share a CUDA tensor between processes?
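For reference, this is a minimal sketch of the Queue approach I’m currently using (names and shapes are just for illustration; it assumes a CUDA device is visible to both processes):

```python
import torch
import torch.multiprocessing as mp

def consumer(queue):
    # Only the CUDA IPC handle travels through the queue, so this maps
    # the producer's device memory instead of copying it.
    t = queue.get()
    t += 1  # in-place update is visible to the producer

if __name__ == "__main__":
    mp.set_start_method("spawn")  # required when passing CUDA tensors
    if not torch.cuda.is_available():
        print("no CUDA device, skipping")
    else:
        q = mp.Queue()
        t = torch.zeros(4, device="cuda")
        q.put(t)
        p = mp.Process(target=consumer, args=(q,))
        p.start()
        p.join()
        print(t)  # reflects the consumer's in-place update
    print("done")
```

One caveat I’m aware of: the producer must keep the tensor alive for as long as the consumer uses it, since the consumer only holds a mapping of the same memory.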
Please correct me if anything I’ve said above is wrong.
Thank you in advance!