NvSciIpc Import/Export with NvSciBufObj: can buffers be shared between VMs?

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
[o] DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
[o] Linux
[o] QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
[o] DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

SDK Manager Version
[o] 2.1.0
other

Host Machine Version
[o] native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Issue Description
NvSciIpc can share data between:
1. different processes @ the same VM => INTER_PROCESS
2. different processes @ different VMs => INTER_VM
3. different processes @ different SoCs (via PCIe) => INTER_CHIP
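
For reference, a minimal, hedged sketch of an NvSciIpc frame write in C is below. The endpoint name "ipc_test_0" is a placeholder for whatever is configured in the PCT/device tree, most error handling and the connection-establishment handshake are omitted, and the same code applies whichever backend (INTER_PROCESS, INTER_VM, INTER_CHIP) backs the endpoint.

#include <stdint.h>
#include <nvsciipc.h>

static NvSciError send_one_frame(void)
{
    NvSciIpcEndpoint ep;
    struct NvSciIpcEndpointInfo info;
    char payload[64] = "hello from producer";
    int32_t bytes = 0;
    NvSciError err;

    err = NvSciIpcInit();                      /* one-time library init */
    if (err != NvSciError_Success) return err;

    /* "ipc_test_0" is a placeholder endpoint name from the PCT config. */
    err = NvSciIpcOpenEndpoint("ipc_test_0", &ep);
    if (err != NvSciError_Success) return err;

    /* Query frame size / count of the channel (not used further here). */
    err = NvSciIpcGetEndpointInfo(ep, &info);
    if (err != NvSciError_Success) return err;

    /* NOTE: a real application resets the endpoint and waits for the
     * connection event via NvSciIpcGetEvent() before writing.          */
    err = NvSciIpcWrite(ep, payload, sizeof(payload), &bytes);

    NvSciIpcCloseEndpoint(ep);
    NvSciIpcDeinit();
    return err;
}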

There are 3 questions:

A.
In the above cases 1, 2, and 3, which will involve actual buffer copying?

B.
If using an NvSciBufObj and NvSciSyncObj with NvSciIpc import/export,
can the NvSciBuf be shared between VMs or not?

Is it legal to perform buffer reconciliation across VMs? If it is legal, where will this buffer be located? Where is this buffer defined, and how can it be accessed by the CPUs/UMDs of each VM?

PS:
process 1 @ VM1, UMD: CUDA
process 2 @ VM2, UMD: DLA
process 3 @ VM3, UMD: CPU

C. Following up on question B, will sharing between VMs involve buffer copying?

Or are NvSciBuf and NvSciSync only usable in the INTER_PROCESS/INTER_VM cases, with NvSciIpc just used to transfer raw data between VMs?

Dear @newnew8787,
A. INTER_PROCESS and INTER_VM do not involve a buffer copy. INTER_CHIP needs a buffer copy.
B. Yes, NvSciBuf can be shared between VMs, but only 2 VMs can share zero-copy memory (no buffer copy). It is legal and possible to perform buffer reconciliation across VMs.
A memory bank is created by the hypervisor and mapped between the 2 VMs; when requested, the memory is provided from this reserved memory bank.
C. If more than 2 VMs are used, a buffer copy is needed. NvSciIpc is not used to transfer buffer data; it is only used to share the buffer and grant access to the importer.
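
To make B/C concrete, a minimal sketch (names simplified, error handling trimmed, exact signatures to be checked against nvscibuf.h of your release) of how only the small export descriptor crosses the NvSciIpc channel while the buffer memory itself is never copied. ipcEp is an already-established endpoint and reconciledList an already-imported reconciled attribute list.

#include <nvscibuf.h>
#include <nvsciipc.h>

/* Exporter side: the process that allocated bufObj from a reconciled list. */
NvSciError export_buf(NvSciBufObj bufObj, NvSciIpcEndpoint ipcEp)
{
    NvSciBufObjIpcExportDescriptor desc;  /* small, fixed-size descriptor  */
    int32_t written = 0;
    NvSciError err;

    /* Grants the peer access; only the descriptor is sent over NvSciIpc,  */
    /* the underlying memory is not copied.                                */
    err = NvSciBufObjIpcExport(bufObj, NvSciBufAccessPerm_ReadWrite,
                               ipcEp, &desc);
    if (err != NvSciError_Success) return err;

    return NvSciIpcWrite(ipcEp, &desc, sizeof(desc), &written);
}

/* Importer side: e.g. the process in the other VM. */
NvSciError import_buf(NvSciIpcEndpoint ipcEp, NvSciBufAttrList reconciledList,
                      NvSciBufObj* outObj)
{
    NvSciBufObjIpcExportDescriptor desc;
    int32_t readBytes = 0;
    NvSciError err;

    err = NvSciIpcRead(ipcEp, &desc, sizeof(desc), &readBytes);
    if (err != NvSciError_Success) return err;

    /* Maps the same allocation into this process; no data copy happens.   */
    return NvSciBufObjIpcImport(ipcEp, &desc, reconciledList,
                                NvSciBufAccessPerm_ReadWrite,
                                0 /* timeoutUs, placeholder */, outObj);
}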

hi @SivaRamaKrishnaNV
thanks for your feedback.

May I ask more,

B. Yes, NvSciBuf can be shared between VMs, but only 2 VMs can share zero-copy memory (no buffer copy). It is legal and possible to perform buffer reconciliation across VMs.
A memory bank is created by the hypervisor and mapped between the 2 VMs; when requested, the memory is provided from this reserved memory bank.

=>

If the case is as below:
Process_A  @ VM_A: 1 producer (SIPL) and 2 consumers (DLA/CPU)
Process_B1 @ VM_B: 1 consumer and also producer (nvmedia_2D, receives frame data from Process_A)
Process_B2 @ VM_B: 1 consumer (nvmedia_LDC, receives frame data from Process_B1)

That means there are two producer/consumer pairs:
Process_A + Process_B1 => 1 producer + 3 consumers (2 reside in Process_A and 1 in another VM)
Process_B1 + Process_B2 => 1 producer + 1 consumer

After buffer reconciliation between them, are those buffers shared without any buffer copy?
Or which buffers are zero copy?

Is the SIPL output buffer shared (zero copy) with the DLA input / CPU input and nvmedia_2D input?
And
is the nvmedia_2D output buffer shared (zero copy) with the nvmedia_LDC input?

+-------------------+       +-------------------+
|       VM_A        |       |       VM_B        |
|                   |       |                   |
|  +-------------+  |       |  +-------------+  |
|  |  Process A  |  |       |  |  Process B1 |  |
|  |             |  |       |  |             |  |
|  |  +-------+  |  |       |  |  +-------+  |  |
|  |  | DLA   |  |  |  |----|--|->| 2D    |  |  |
|  |  +-------+  |  |  |    |  |  +-------+  |  |
|  |     ^       |  |  |    |  |     |       |  |
|  |     |       |  |  |    |  +-------------+  |
|  |  +-------+  |  |  |    |        |          |
|  |  | SIPL  |--|--|--|    |  +-----|-------+  |
|  |  +-------+  |  |       |  |     |       |  |
|  |     |       |  |       |  |     v       |  |
|  |  +--v----+  |  |       |  |  +-------+  |  |
|  |  |  CPU  |  |  |       |  |  | LDC   |  |  |
|  |  +-------+  |  |       |  |  +-------+  |  |
|  +-------------+  |       |  |  Process B2 |  |
|                   |       |  +-------------+  |
+-------------------+       +-------------------+

C. If more than 2 VMs are used, a buffer copy is needed.
NvSciIpc is not used to transfer buffer data; it is only used to share the buffer and grant access to the importer.

=>

May I ask whether what you said refers to the case below (more than 2 VMs)?
Process_A  @ VM_A => Process_B @ VM_B 
Process_A  @ VM_A => Process_C @ VM_C

Process_A's output buffer is shared between Process_B and Process_C; in this case, does the buffer need to be copied?

If a copy is needed, is it handled automatically by NvSciStream?
Or does the producer need to duplicate the output buffer and perform reconciliation twice, once with Process_B and once with Process_C => in other words, 2 producer/consumer pairs?
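
(For what it's worth, a hypothetical sketch of what the "2 producer/consumer pairs" option would mean at the API level: the same allocation exported separately to each consumer's NvSciIpc endpoint. Whether this is actually supported across more than 2 VMs is exactly the open question; epToB/epToC are placeholder endpoints and error handling is trimmed.)

#include <nvscibuf.h>
#include <nvsciipc.h>

NvSciError export_to_both(NvSciBufObj bufObj,
                          NvSciIpcEndpoint epToB, NvSciIpcEndpoint epToC)
{
    NvSciBufObjIpcExportDescriptor descB, descC;
    NvSciError err;

    /* One export descriptor per destination endpoint.                    */
    err = NvSciBufObjIpcExport(bufObj, NvSciBufAccessPerm_ReadWrite,
                               epToB, &descB);
    if (err != NvSciError_Success) return err;

    err = NvSciBufObjIpcExport(bufObj, NvSciBufAccessPerm_ReadWrite,
                               epToC, &descC);
    /* ... then send descB/descC over their respective endpoints ...      */
    return err;
}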

=======================================================

One more question about NvSciStream:

If we ONLY use NvSciBuf/NvSciSync/NvSciIpc with import/export (without NvSciStream), can buffer sharing still be achieved in the INTER_PROCESS and INTER_VM cases?

My understanding is that NvSciStream does the buffer management, changes the code flow to an event/switch-based model, and handles some of the IPC import/export tasks. Can everything still be achieved using only NvSciBuf/NvSciSync/NvSciIpc with import/export?
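
As a rough illustration of that manual (non-NvSciStream) flow on the allocating side, assuming the peer's unreconciled attribute list has already arrived as a descriptor over NvSciIpc (function and parameter names are placeholders, error handling trimmed):

#include <stddef.h>
#include <nvscibuf.h>
#include <nvsciipc.h>

NvSciError reconcile_and_alloc(NvSciBufModule module,
                               NvSciBufAttrList localList,
                               NvSciIpcEndpoint ipcEp,
                               const void* peerDesc, size_t peerDescLen,
                               NvSciBufObj* outObj)
{
    NvSciBufAttrList peerList = NULL, reconciled = NULL, conflicts = NULL;
    NvSciError err;

    /* peerDesc was produced by NvSciBufAttrListIpcExportUnreconciled()    */
    /* in the other process/VM and carried over the NvSciIpc channel.      */
    err = NvSciBufAttrListIpcImportUnreconciled(module, ipcEp,
                                                peerDesc, peerDescLen,
                                                &peerList);
    if (err != NvSciError_Success) return err;

    NvSciBufAttrList lists[2] = { localList, peerList };
    err = NvSciBufAttrListReconcile(lists, 2, &reconciled, &conflicts);
    if (err != NvSciError_Success) return err;

    /* The allocation honors the constraints of every engine involved;     */
    /* the NvSciBufObj can then be exported to the peer as shown earlier.  */
    return NvSciBufObjAlloc(reconciled, outObj);
}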

Hi,
can anyone help with this topic?
Thanks very much.

Dear @newnew8787,
Just to clarify, the DevZone release has a single guest OS (GOS) Linux PCT. The use case has to run on the guest OS (which runs on a VM) since you are using SIPL/2D/DLA/LDC APIs, etc. So running these processes across VMs is not possible.
You may consider using NvStreams for this case.
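
To make that suggestion concrete, below is a rough wiring sketch of the NvSciStream blocks for this topology when every process runs on the same guest OS: one producer, a multicast block, two local consumers, and one consumer in another process reached through an IpcSrc block. Attribute/packet setup, the event loop, and error checks are omitted; bufModule, syncModule, and ipcEp are assumed to be created elsewhere.

#include <nvscibuf.h>
#include <nvscisync.h>
#include <nvsciipc.h>
#include <nvscistream.h>

void wire_stream(NvSciBufModule bufModule, NvSciSyncModule syncModule,
                 NvSciIpcEndpoint ipcEp)
{
    NvSciStreamBlock pool, producer, multicast, queue0, queue1;
    NvSciStreamBlock consumer0, consumer1, ipcSrc;

    NvSciStreamStaticPoolCreate(3, &pool);         /* 3 packets            */
    NvSciStreamProducerCreate(pool, &producer);
    NvSciStreamMulticastCreate(3, &multicast);     /* 3 downstream outputs */

    NvSciStreamFifoQueueCreate(&queue0);
    NvSciStreamConsumerCreate(queue0, &consumer0); /* e.g. DLA consumer    */
    NvSciStreamFifoQueueCreate(&queue1);
    NvSciStreamConsumerCreate(queue1, &consumer1); /* e.g. CPU consumer    */

    /* Third output crosses to another process via NvSciIpc.               */
    NvSciStreamIpcSrcCreate(ipcEp, syncModule, bufModule, &ipcSrc);

    NvSciStreamBlockConnect(producer, multicast);
    NvSciStreamBlockConnect(multicast, consumer0);
    NvSciStreamBlockConnect(multicast, consumer1);
    NvSciStreamBlockConnect(multicast, ipcSrc);
    /* The peer process creates its own IpcDst + queue + consumer blocks   */
    /* and connects them on its side.                                      */
}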

Hi @SivaRamaKrishnaNV, thanks for your feedback.

The Nvidia website (NVIDIA DRIVE OS Linux SDK Developer Guide | NVIDIA Docs) provides a lot of information on UMD communication, including examples of INTER_PROCESS, INTER_VM, and INTER_SOC communication. I just want to know the answer, regardless of whether only one VM is running on the device.

thanks so much

Dear @newnew8787,
NvSciStream is not supported for INTER_VM (across VMs).
When using NvSciBuf, the buffer can be shared across two VMs without an additional copy. But the user has to take care of the application workflow and synchronization, as the same buffer is shared across multiple processes (even across VMs).
But as clarified earlier, the use case you are looking at (using 2D/LDC/SIPL, etc.) will run on a single VM (guest OS), so INTER_VM is not applicable.
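
Regarding "the user has to take care of ... synchronization": below is a minimal CPU-signaler/CPU-waiter sketch of what the application itself would manage once the buffer is shared without NvSciStream. syncObj is assumed to be a reconciled NvSciSyncObj that both sides have imported, waitCtx an NvSciSyncCpuWaitContext, and the buffer accesses are only indicated by comments.

#include <nvscisync.h>

/* Producer side: obtain a fence, do the work on the shared buffer, signal. */
NvSciError produce(NvSciSyncObj syncObj, NvSciSyncFence* fence)
{
    /* Generate the fence first so it can be handed to the consumer         */
    /* (e.g. exported over NvSciIpc) before the work completes.             */
    NvSciError err = NvSciSyncObjGenerateFence(syncObj, fence);
    if (err != NvSciError_Success) return err;

    /* ... write into the shared NvSciBuf here ...                          */

    return NvSciSyncObjSignal(syncObj);   /* makes the fence expire         */
}

/* Consumer side: wait on the fence before touching the same buffer. */
NvSciError consume(const NvSciSyncFence* fence, NvSciSyncCpuWaitContext waitCtx)
{
    NvSciError err = NvSciSyncFenceWait(fence, waitCtx, -1 /* wait forever */);
    /* ... safe to read the shared NvSciBuf here ...                        */
    return err;
}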