Inter chip communication on NVIDIA DRIVE™ Software 10.0 (Linux)

Please provide the following info (check/uncheck the boxes after clicking “+ Create Topic”):
Software Version
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
Linux

Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)

SDK Manager Version
1.5.1.7815

Host Machine Version
native Ubuntu 18.04

I have received a DRIVE AGX Xavier DevKit pre-flashed with Linux.
I am trying to establish communication between endpoints using NvSciIPC.
Inter-process communication works fine with the channels already provided in the nvsciipc.cfg file.
Now I am trying to establish communication between Xavier A and Xavier B using NvSciIPC, with the same piece of code, just by changing the endpoint names to endpoints already provided in the nvsciipc.cfg file.
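For reference, the endpoints being opened come from the NvSciIpc configuration file on the target. A sketch of its line format, assuming the default layout documented for DRIVE OS (backend type, the two endpoint names, then backend-specific parameters such as number of frames and frame size; the names and values here are illustrative, not from my board):

```
INTER_THREAD    itc_test_0    itc_test_1    64    1536
INTER_PROCESS   ipc_test_0    ipc_test_1    64    1536
```

INTER_CHIP entries (such as the nvscic2c_3 endpoint used below) appear in the same file with their own backend-specific fields.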

The code is cross-compiled on the host machine, but when run on the target it fails with the error “libnvscic2c.so is not available”.

Please find attached the log for reference:

Read: Before Init
Read: NvSciIpcInit success
!err[L:62]:nvsciipc_c2c_open_library: libnvscic2c.so is not available
!err[L:225]:nvsciipc_c2c_open_endpoint: C2C backend is not supported
!err[L:286]:NvSciIpcOpenEndpoint: Failed to open nvsciipc [INTER_CHIP] endpoint: 17
Read: NvSciIpcOpenEndpoint failed nvscic2c_3 :NvSicError = 0x11 (Endpoint creation not supported)
Error in reading from endpoint

As per the log, we are getting a “not supported” error while creating an endpoint for inter-chip communication. Is inter-chip communication supported through NvSciIPC as per the doc? If yes, can you please point out the document that details inter-chip communication?

Also, can you please confirm whether Xavier is expected to have “libnvscic2c.so” available after installation, or whether anything extra needs to be done for it.
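As a quick way to check on the target whether the library is visible to the dynamic loader (a generic check, not DRIVE-specific; it queries the standard library search path, so it reports None on any machine where the library is absent):

```python
import ctypes.util

# Ask the dynamic linker whether libnvscic2c is installed anywhere on the
# default library search path; find_library returns None when it is absent.
print(ctypes.util.find_library("nvscic2c"))
```

The same check can be done from a shell with `ldconfig -p`, but the Python form is handy inside a test script.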

Dear @aniruddha.nadgouda,
C2C communication is not supported on Xavier.

@SivaRamaKrishnaNV Thank you for the very prompt reply, providing the clarification.
We have already established socket communication (using the DriveWorks socket API) to transfer data between Xavier A and B between our two features running on each Xavier. The basic system works fine.
Can you please confirm whether this is the best way to establish communication between the two Xaviers (Xavier A and Xavier B), or whether there is another recommended method to exchange data between them? If there is an alternative method, can you please point to its documentation?
Note: There is a requirement to exchange 10 MB of data (total) between Xavier A and Xavier B per frame, within 100 ms.
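For context on that requirement: 10 MB every 100 ms is 100 MB/s, i.e. about 800 Mbit/s sustained, which is close to wire speed on 1 GbE. The DriveWorks socket API itself is not shown here; the following is only a platform-independent TCP loopback sketch of the transfer pattern (the port is OS-assigned, all names are local to this example):

```python
import socket
import threading
import time

# Requirement from the thread: 10 MB per frame, one frame every 100 ms,
# i.e. 100 MB/s (~800 Mbit/s) sustained.
PAYLOAD = b"\x00" * (10 * 1024 * 1024)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # OS-assigned port; avoids conflicts
srv.listen(1)
port = srv.getsockname()[1]

received = 0

def receiver():
    # Accept one connection and drain it until the sender closes.
    global received
    conn, _ = srv.accept()
    while True:
        chunk = conn.recv(1 << 20)
        if not chunk:
            break
        received += len(chunk)
    conn.close()

t = threading.Thread(target=receiver)
t.start()

cli = socket.create_connection(("127.0.0.1", port))
start = time.monotonic()
cli.sendall(PAYLOAD)
cli.close()                          # receiver sees EOF and stops
t.join()
elapsed = time.monotonic() - start
srv.close()

print(f"transferred {received} bytes in {elapsed * 1000:.1f} ms")
```

Loopback easily meets the rate; the real question is whether the physical link between the two Xaviers does.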

Dear @aniruddha.nadgouda,
Currently, the DW socket API, as you used, is the way to communicate across the two Xaviers. Do you see any issue?

@SivaRamaKrishnaNV There are no issues observed; I just wanted to confirm whether this is the best way to establish communication.
Thanks for the confirmation.


Hi @SivaRamaKrishnaNV: Thanks for your reply. I could successfully exchange the data between Xavier A and Xavier B using the DW API.
Just for clarification, the DRIVE OS document provides an explanation of inter-chip communication. A few questions on those points:

  1. Does inter-chip communication mean between Xavier A and B, or between two platforms?
  2. If DRIVE AGX Xavier does not support it, which hardware does? Does the current version of Pegasus support it? Will a later version of DRIVE OS support it for DRIVE AGX Xavier?
  3. Is Ethernet-based data transfer used as the underlying implementation for inter-chip communication, or is some kind of shared memory used?

Please help provide these details at the earliest, as this will help me select the exact hardware for actual development; my first-level PoC is now working on Xavier.

Dear @aniruddha.nadgouda,

  1. Inter-chip means it is between Tegra A and Tegra B.
  2. C2C communication is not currently enabled on Xavier for the DevZone release. Please contact your NVIDIA representative if you need any further support on this.
  3. C2C is a different communication method. It does not use Ethernet.

Hi,
I have similar questions. What is C2C? Does it refer to Tegra A on board 1 and Tegra? on board 2 (I am referring to Pegasus)? Are there any other APIs (other than the DW socket API referred to in this thread, and NvSciIPC) for any possible permutation of communication?

Thanks.

Dear @ai.trucker,
C2C communication refers to communication across Tegra A and Tegra B.

Are there any other APIs (other than the DW socket API referred to in this thread, and NvSciIPC) for any possible permutation of communication?

No

Can you please confirm whether C2C is enabled on the Pegasus boards? I would like to clarify this in the context of your earlier comment.

Hi @ai.trucker,

C2C communication over NvSciIPC isn’t supported in DevZone releases.

Does that mean that for now our only choice is the “DW socket API”? Any idea when it will be supported?

For DriveWorks, yes, it’s the API.
For Linux, there should be some other middleware, e.g. DDS.

Hello @VickNV
I found this reply about the difference between some of the communication methods that the Xavier SoC can use. The response claims that C2C can be used for communication between two SoCs over PCIe. Since C2C communication is not supported over NvSciIPC, what other alternatives do we have to achieve C2C communication between two SoCs over PCIe?
Thanks in advance for your reply

Hi @VickNV,

Any idea when NvSciIPC support for C2C can be provided? I believe it would bring uniformity to IPC in an application that has some inter-process/thread and some inter-chip (C2C) communication, which makes the application easier to create.

Also, if possible, I wanted to understand what method NvSciIPC will use for C2C, as inter-process and inter-thread communication use shared memory. Will it use socket-based communication over a PCIe-switch-enabled IP to achieve 10 Gbps speed, or something else?

Sorry my reply wasn’t clear. I meant C2C isn’t available in DevZone releases.

For the schedule question, please contact your NVIDIA representative. Usually we don’t discuss it in this forum. Sorry for any inconvenience.

C2C is via PCIe Non-Transparent Bridge (NTB). Thanks.

What path does the “DW socket API” use?
Thanks.

Please refer to the IPC module document.