PCIe Xavier to Xavier communication

Hello all,

I am trying to get two Xavier SoCs communicating over PCIe. My objective is to get some benchmarking figures, as well as to learn which APIs are available to achieve this. So far I have read that the DriveWorks Socket API is the recommended way of communicating between two Xavier SoCs, so my questions are:

  • Can the DW Socket API be used over PCIe?
  • Is there another way of transferring data between SoCs using PCIe?

So far, I have not been able to find these answers in the DriveWorks docs.

Thanks in advance

Please see if Non-Transparent Bridging and PCIe Interface Communication helps.


That is indeed very useful and the type of communication I was looking for.
After creating a virtual ethernet interface over PCIe using a Non-Crosslink NTB connection (as shown in the link provided), what APIs are available to access that virtual ethernet interface? Is it accessible through the DW Socket API?

Even iperf can run on it, so I think the socket API should work.
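For example, a quick throughput sanity check with iperf over the NTB virtual ethernet (the 192.168.2.x addresses below are placeholders for whatever you assigned to the interfaces, not values from the documentation):

$ iperf -s                    # on the first Xavier (server side)
$ iperf -c 192.168.2.1 -t 30  # on the second Xavier, targeting the first Xavier's NTB address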
I’m checking internally for DW Socket API support and will get back to you. Thanks.

Hello @VickNV
Thanks a lot for your answers, they have been extremely helpful.
I have a question regarding a claim I saw in another post. According to this answer, C2C communication over PCIe is not supported. Nonetheless, I successfully followed the steps you provided in Non-Transparent Bridging and PCIe Interface Communication; specifically, I could run lspci and see the PCIe devices!
I have successfully created a virtual ethernet interface over NTB PCIe as instructed.
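(For reference, the interfaces were brought up roughly along these lines; the interface name eth1 and the 192.168.2.x addresses are placeholders, not names from the documentation.)

$ sudo ip addr add 192.168.2.1/24 dev eth1  # on the first Xavier
$ sudo ip link set eth1 up
$ sudo ip addr add 192.168.2.2/24 dev eth1  # on the second Xavier
$ sudo ip link set eth1 up
$ ping -c 3 192.168.2.2                     # reachability check from the first Xavier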

My question is:
Is this connection really using PCIe? Or is it taking another HW path?

I meant C2C (INTER_CHIP in Inter-Process Communication) isn’t supported in DevZone releases.

Understood. I am not very familiar with the terminology. Is there any way to know whether my current version of DRIVE OS is a DevZone release?
If I am able to complete the steps in Non-Transparent Bridging and PCIe Interface Communication, does that mean I am using PCIe communication?

If you log in to SDK Manager with “NVIDIA DEVELOPER” to install/flash your target system, yours is a DevZone release.


Or run the command listed at https://developer.nvidia.com/drive/downloads to check whether it’s a DevZone release:

$ cat /etc/nvidia/version-ubuntu-rootfs.txt

Yes.

Hi VickNV,
I see a few options for Xavier-to-Xavier communication in the attached picture.

  1. Highlighted in the black rectangle (marked A) is communication via ethernet - could you please explain which path this is?
  2. The PCIe Gen 3 switch-based path in the red rectangle (marked B) is communication over the PCIe x4 path shown in red.
  3. The 10GbE path is highlighted in the yellowish rectangle (marked C).

My understanding is that the DW Socket API alluded to in this thread is for option B. Could you please point to the APIs for each of options A and C?

Thanks.

Please refer to Network Topology for AV+L PCT Config for option A and C. Thanks.

I checked internally. We haven’t verified the API, but I guess it may work.

Hello @VickNV ,
There does not seem to be information about specific APIs in the section provided. Is it possible to use DW Sockets for this ethernet connection?
On a related note, I saw the following sample path being used in the link you provided:

/home/nvidia/drive-t186ref-linux/samples/nvavb/daemons

Could you provide the path to the source code?
Thanks in advance

That seems to be another topic. Please create a new one for it. Thanks.


Hello @VickNV
Thanks for your replies so far.
Could you elaborate on your reply?
We would like explicit confirmation that, after following the steps in Non-Transparent Bridging and PCIe Interface Communication, a connection using the DriveWorks Socket API uses the hardware path labeled B in the attached picture.

Thanks a lot for your help!

We haven’t validated DriveWorks Socket API over NTB.
If it’s working, yes, it is via the B path.
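One rough way to check this yourself (the interface name eth1 is a placeholder for the actual NTB-backed interface) is to see which driver and PCIe device sit behind the virtual interface:

$ ethtool -i eth1  # the driver and bus-info fields should point at the NTB PCIe endpoint
$ lspci            # cross-check the bus address against the PCIe devices listed here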
Let me know if anything needs to be clarified. Thanks.

That is the answer we were looking for, @VickNV! Thanks a lot for your continued support!

Hello Vick, in the link you provided, there is a reference to a document called DRIVE Dual 10G Ethernet Dongle Product Spec for setup instructions, but it is not linked, nor is it shown where it can be found. Could you tell us where to find this document?
We are trying to connect the 10GbE interfaces (path C in our picture) and need to bridge the two 10GbE switches. From the following picture: are the white HSD connectors the 10GbE switches?

Please refer to “APPENDIX B. DUAL 10G ETHERNET DONGLE” in DRIVE AGX Developer Kit Product Brief (PDF).
Yes, the white HSD connectors are the ones.