The bandwidth of virtual Ethernet over PCIe between two Xaviers is low

Vidyas,
Thanks a lot for your detailed answer!
I read WayneWWW’s topic on how to test DMA writes and reads, but I don’t understand what RC mode DMA and EP mode DMA mean. How can I send a file from the EP-AGX to the RC-AGX and receive it with a software application?

You can enable the virtual Ethernet protocol on both the RC-AGX and the EP-AGX, which provides an ‘eth1’ interface on both systems. You can then use any standard network file transfer protocol to copy files (the simplest being the ‘scp’ tool).
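For example, once the link is up, a file can be copied with ‘scp’ over the eth1 interface. A minimal sketch follows; the addresses, username, and file path are placeholders, not values from this thread, and the commands are echoed rather than executed so the snippet is safe to run anywhere:

```shell
# Sketch: bring up the PCIe virtual Ethernet link and copy a file.
# All addresses, the username, and the file path are assumptions.
RC_IP=192.168.2.1
EP_IP=192.168.2.2

# On the RC-AGX (as root):
echo "ip addr add ${RC_IP}/24 dev eth1 && ip link set eth1 up"
# On the EP-AGX (as root):
echo "ip addr add ${EP_IP}/24 dev eth1 && ip link set eth1 up"
# Then, from the EP-AGX, copy a file to the RC over the PCIe link:
echo "scp /path/to/file nvidia@${RC_IP}:/tmp/"
```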

Thanks! I have tried communication between the two AGXs over virtual Ethernet, and it works properly! However, I want to use the PCIe DMA to send and receive files between the two AGXs, not only to test the transfer speed. Do you have any suggestions?

DMA support will be available in the next release. Please stay tuned for that.

Thanks a lot! About the next release: can you tell me when it will be released?

Hi,

It should be released in about one to two months.

Vidyas and Wayne,

It’s good to hear that we can get the DMA version in one to two months.

Let’s go back to my earlier question: I can’t find “/sys/kernel/debug/tegra_pcie_ep/”.

In the instructions you gave me:

EP Mode DMA

Write
Go to the debugfs directory of the endpoint client driver
     cd /sys/kernel/debug/tegra_pcie_ep/

I just realized that I should look for this folder on the RP Xavier. Am I correct?

Thanks.

Yes, you are correct. You have to look on the RP Xavier.
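As a quick sanity check on the RP Xavier, you can confirm the directory exists before running the DMA tests (a sketch; the node names inside the directory vary by release):

```shell
# Verify the endpoint client driver's debugfs directory is present.
# Requires debugfs to be mounted (it usually is on L4T) and the
# PCIe link to the EP to be up.
DMA_DBG=/sys/kernel/debug/tegra_pcie_ep
if [ -d "$DMA_DBG" ]; then
    ls "$DMA_DBG"          # list the available DMA test nodes
else
    echo "$DMA_DBG not found: is debugfs mounted and the EP link up?"
fi
```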

Vidyas and Wayne,
Thank you very much!
I’m looking forward to getting the DMA version.

Any update on this with the release of 4.2?

Hi,

JetPack SDK 4.2.1 has synchronous DMA support; however, by design its performance is limited. We are getting ~150 Mbps.
We are working on asynchronous DMA, which should give performance in the Gbps range. We are targeting the next JetPack (4.3) release in October.

Manikanta

Cool, thanks for the update. Just to clarify… that’s 150 megabits per second? Is that per PCIe lane, or for PCIe x4?

Yes, it is megabits per second, and it is for PCIe x8.
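To put those rates in perspective, here is a rough shell calculation of how long a 1 GiB file would take at each one (the file size is just an illustrative assumption, and protocol overhead is ignored):

```shell
# Time to move a 1 GiB file at the quoted link rates (integer seconds).
FILE_BITS=$((1024 * 1024 * 1024 * 8))   # 1 GiB expressed in bits
SYNC_BPS=$((150 * 1000 * 1000))         # 150 Mbps, synchronous DMA
ASYNC_BPS=$((5 * 1000 * 1000 * 1000))   # 5 Gbps, asynchronous DMA target
echo "sync DMA:  $((FILE_BITS / SYNC_BPS)) s"    # about 57 s
echo "async DMA: $((FILE_BITS / ASYNC_BPS)) s"   # under 2 s
```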

Hi Manikanta,
Is there confirmation that async DMA support will be integrated in JetPack 4.3, and is there a planned release date for it?
Best
Marcus

I am also interested in the async version and the virtual Ethernet.
Will there be official instructions for configuring and setting it up?

We’ll try to release the async DMA version in 4.3. Whenever it is available, we’ll make sure there is official documentation on how to configure it.

It looks like 4.3 is out, and there is indeed documentation on how to get Ethernet over PCIe working. But the 150 Mbps (or less) limit is really slow. 1 Gbps Ethernet has been standard for almost 20 years, so this feature is effectively useless, considering 1 GbE support is already built in.

https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fxavier_PCIe_endpoint_mode.html%23wwpID0ETHA

Known Issues
The driver currently uses a synchronous DMA implementation, which by design limits performance to 150 Mbps or less. Asynchronous DMA offers higher performance, but has known issues which prevent its use at this time.

Are there reasonable alternatives that won’t require us to write our own communication stack for PCIe, apart from adding an extra 20 W for a pair of 10 GbE cards? Can the USB ports at least be used for fast peer-to-peer communication?

Thanks
Arunas

Yup… I’ve tried this “feature”, and it’s really not usable in this form.
So now I have a useless adapter and have to build a more complex board again… really disappointing.

Will this ever work any faster? Sounds almost like… never…

We have a few patches yet to be released with the next JetPack version. With these patches we are getting 5 Gbps TCP link speed.
Please try out these patches and let us know your feedback.

Download patches from:
https://drive.google.com/a/nvidia.com/file/d/1OSii9BCiMxxnvvER5T8YZKVElYUfnsOS/view?usp=sharing
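To check the link after applying the patches, you can measure TCP bandwidth and latency with a tool such as qperf. A sketch follows; the address is a placeholder for the RC’s eth1 address, qperf must be installed on both sides, and the commands are echoed so the snippet is safe to run anywhere:

```shell
# Placeholder address for the RC side of the PCIe virtual Ethernet link.
RC_IP=192.168.2.1

# On the RC-AGX: start the qperf server (no arguments).
echo "qperf"
# On the EP-AGX: measure TCP bandwidth and latency across the link.
echo "qperf ${RC_IP} tcp_bw tcp_lat"
```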

Thanks,
Om

I have tried the patches and I am seeing improvements.
qperf shows a latency of 1.8 ms and a bandwidth of 2.77 Gbit/s.
The connection is still detected as Gen 1; with Gen 4 the results should be even higher, if I’m not mistaken.
Can we expect any further improvements?
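For context, a rough shell calculation of the theoretical per-direction bandwidth of an x8 link at each generation, accounting only for line encoding (PCIe protocol overhead is ignored):

```shell
# Theoretical x8 link bandwidth per direction, in Mbps.
# Gen1: 2.5 GT/s per lane with 8b/10b encoding.
# Gen4: 16 GT/s per lane with 128b/130b encoding.
GEN1_X8=$((2500 * 8 / 10 * 8))       # 16000 Mbps  (16 Gbps)
GEN4_X8=$((16000 * 128 / 130 * 8))   # 126024 Mbps (~126 Gbps)
echo "Gen1 x8: ${GEN1_X8} Mbps"
echo "Gen4 x8: ${GEN4_X8} Mbps"
```

So a 2.77 Gbit/s measurement on a link that trains only to Gen 1 is already a sizable fraction of the theoretical ceiling; a link trained to Gen 4 would leave far more headroom.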