Please provide the following info (check/uncheck the boxes after clicking “+ Create Topic”): Software Version
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
I have received a DRIVE AGX Xavier DevKit pre-flashed with Linux.
I am trying to establish communication between endpoints using NvSciIPC.
Inter-process communication worked fine using the channels already provided in the nvsciipc.cfg file.
Now I am trying to establish communication between Xavier A and Xavier B over NvSciIPC, using the same code but with the endpoint names changed to the inter-chip endpoints already provided in the nvsciipc.cfg file.
The code is cross-compiled on the host machine, but when run on the board it fails with the error “libnvscic2c.so is not available”.
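For context, this is roughly what the inter-process/inter-thread channel entries in nvsciipc.cfg look like. The endpoint names, frame counts, and frame sizes below are purely illustrative (the INTER_CHIP entries use a different, backend-specific format), so check the actual file shipped on your board:

```
# <backend>     <endpoint-A>  <endpoint-B>  <frames>  <frame-size>
INTER_THREAD    itc_test_0    itc_test_1    64        1536
INTER_PROCESS   ipc_test_0    ipc_test_1    64        1536
```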
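As a side note, the missing-library symptom can be checked independently of NvSciIPC: the C2C backend is loaded dynamically at runtime, so one quick way to see whether the board can find it is to try loading it yourself. This is just a minimal sketch using Python's ctypes; the library name comes from the error message, and the helper name is my own:

```python
import ctypes

def shared_lib_available(libname):
    """Return True if the dynamic linker can load `libname`."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# On the failing board this mirrors the NvSciIpc C2C error path:
if not shared_lib_available("libnvscic2c.so"):
    print("libnvscic2c.so is not available on this system")
```

If this also fails on the board, the problem is library installation/search-path, not the application code.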
Please find attached the log for reference:
Read: Before Init
Read: NvSciIpcInit success
!err[L:62]:nvsciipc_c2c_open_library: libnvscic2c.so is not available
!err[L:225]:nvsciipc_c2c_open_endpoint: C2C backend is not supported
!err[L:286]:NvSciIpcOpenEndpoint: Failed to open nvsciipc [INTER_CHIP] endpoint: 17
Read: NvSciIpcOpenEndpoint failed nvscic2c_3 :NvSicError = 0x11 (Endpoint creation not supported)
Error in reading from endpoint
As per the log, we get a “not supported” error while creating an endpoint for inter-chip communication. Is inter-chip communication supported through NvSciIPC? If yes, could you point out the document that details inter-chip communication?
Also, could you confirm whether the Xavier is expected to have “libnvscic2c.so” available after installation, or whether anything extra needs to be done for it?
@SivaRamaKrishnaNV Thank you for the very prompt reply and the clarification.
We have already established socket communication (using the DriveWorks socket API) to transfer data between Xavier A and Xavier B, between our two features running on each Xavier. The basic system works fine.
Can you please confirm whether this is the best way to establish communication between the two Xaviers (Xavier A and Xavier B), or is there another recommended method to exchange data between them? If there is an alternative method, could you point out where we can find its documentation?
Note: there is a requirement to exchange 10 MB of data (total) between Xavier A and Xavier B per frame, i.e. every 100 ms.
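To put that requirement in perspective, here is a quick back-of-the-envelope calculation (pure arithmetic, assuming decimal megabytes and ignoring protocol overhead): 10 MB every 100 ms is 100 MB/s, i.e. 800 Mbit/s of sustained payload, which is close to saturating a 1 GbE link but comfortably within a 10 Gbps link:

```python
# Required sustained throughput for 10 MB of payload per 100 ms frame.
payload_bytes = 10e6      # 10 MB per frame (decimal MB assumed)
frame_period_s = 0.100    # one frame every 100 ms

bytes_per_s = payload_bytes / frame_period_s
mbit_per_s = bytes_per_s * 8 / 1e6

print(f"{bytes_per_s / 1e6:.1f} MB/s = {mbit_per_s:.0f} Mbit/s")
# prints: 100.0 MB/s = 800 Mbit/s
```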
Hi @SivaRamaKrishnaNV: Thanks for your reply. I could successfully exchange data between Xavier A and Xavier B using the DW API.
Just for clarification: the DRIVE OS document explains inter-chip communication. A few questions on those points:
Does inter-chip communication mean communication between Xavier A and B, or between two platforms?
If DRIVE AGX Xavier does not support it, which hardware does? Does the current version of Pegasus support it? Is a later version of DRIVE OS going to support it on DRIVE AGX Xavier?
Is Ethernet-based data transfer used as the underlying implementation for inter-chip communication, or is some kind of shared memory used?
Please provide these details at the earliest, as this will help me select the exact hardware for actual development, since my first-level PoC is now working on Xavier.
I have similar questions. What is C2C? Does it refer to Tegra A on board 1 and Tegra B on board 2 (I am referring to Pegasus)? Are there any other APIs (other than the DW socket API referred to in this thread, and NvSciIpc) for any possible permutation of communication?
I found this reply about the difference between some communication methods that can be used by the Xavier SoC. In that response, it is claimed that C2C can be used to communicate between two SoCs over PCIe. Since C2C communication is not supported over NvSciIPC, what other alternatives do we have to achieve C2C communication between two SoCs over PCIe?
Thanks in advance for your reply
Any idea when NvSciIPC support for C2C will be provided? I believe it would bring uniformity of IPC within an application that has both inter-process/inter-thread and inter-chip (C2C) communication, which makes it easier to build the application.
Also, if possible, I wanted to understand what mechanism NvSciIPC will use for C2C, given that inter-process and inter-thread communication use shared memory. Will it use socket-based communication over a PCIe-switch-enabled IP link to achieve 10 Gbps, or something else?