Hardware Design Inquiry: Implementing 4-Port LAN on Orin Nano using Native GbE and PCIe

Hello,

I am currently designing a custom carrier board for the Jetson Orin Nano SOM. My goal is to implement a total of 4 Gigabit Ethernet (LAN) ports on this carrier board.

The planned architecture is as follows:

  1. Port 1: Uses the Orin Nano’s Native GbE interface (RGMII).

  2. Ports 2~4: Use the PCIe interfaces (PCIe1/2/3), each connecting to a Microchip LAN7430 (PCIe-to-Gigabit-Ethernet bridge) controller.

I would like to ask a few technical questions regarding this design:

1. Simultaneous Usage of Native GbE and PCIe LAN: Are there any known issues, resource conflicts, or bandwidth limitations when using the native GbE and PCIe-based LAN controllers simultaneously on the Orin Nano? I want to ensure that all 4 ports can operate at full speed without stability issues.

2. PCIe Switch Configuration: To implement multiple LAN ports via PCIe, I am considering using a PCIe packet switch to connect multiple LAN7430 chips to a single PCIe Root Port (e.g., UPHY0 or UPHY1).

  • Is this topology (PCIe Switch → Multiple LAN7430s) fully supported by the standard JetPack driver/BSP?

  • Will the device enumeration and communication function normally in this configuration?

3. EtherCAT Compatibility over PCIe: My application requires EtherCAT Master functionality.

  • Is the PCIe-based LAN (specifically using LAN7430) suitable for EtherCAT communication in terms of latency and jitter?

  • Are there any recommended configurations to ensure real-time performance when using a PCIe NIC for EtherCAT on Orin Nano?

4. Expanding Native GbE with an Ethernet Switch: As an alternative to using PCIe for all extra ports, is it possible to connect an Ethernet Switch IC directly to the native GbE (RGMII) interface instead of a standard PHY? I am considering this to expand the native port into multiple ports.

I would appreciate your expert advice on these design considerations.

Thank you.

*** Please note that this reply is generated by LLM automatically ***

I’ll address each of your questions regarding the design of a custom carrier board for the Jetson Orin Nano SOM.

1. Simultaneous Usage of Native GbE and PCIe LAN

The Jetson Orin Nano SOM provides a native Gigabit Ethernet (GbE) interface (RGMII) and multiple PCIe interfaces. There are no known resource conflicts between the native GbE and PCIe-based LAN controllers when used simultaneously, but you must configure the PCIe links correctly and confirm that the aggregate bandwidth requirements of all four ports are met.

The Orin Nano’s native GbE is a dedicated interface and is separate from the UPHY lanes used by the PCIe controllers, so the RGMII-based port and the PCIe-based ports operate independently of each other.

To ensure that all four ports can operate at full speed without stability issues, you should:

  • Verify the PCIe link configuration (lane assignment and link speed) against the aggregate bandwidth of the LAN ports.
  • Keep the power consumption of the LAN7430 chips and the PCIe switch (if used) within the carrier board’s power budget.
  • Stress-test all four ports simultaneously (e.g., with iperf3) to confirm there are no stability issues.
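As a quick bring-up aid, the link-speed part of those checks can be scripted by reading the negotiated speed from sysfs before running throughput tests. A minimal sketch: the interface names eth0..eth3 are assumptions (substitute whatever `ip link` shows on your board), and the sysfs root is a parameter so the helper can be exercised off-target.

```python
#!/usr/bin/env python3
"""Link-speed sanity check for multi-NIC bring-up (a sketch)."""
from pathlib import Path
from typing import Optional


def link_speed_mbps(iface: str, sysfs_root: str = "/sys/class/net") -> Optional[int]:
    """Return the negotiated link speed in Mb/s, or None if the interface
    is missing or the link is down (the kernel reports -1 in that case)."""
    try:
        speed = int((Path(sysfs_root) / iface / "speed").read_text().strip())
    except (OSError, ValueError):
        return None
    return speed if speed > 0 else None


if __name__ == "__main__":
    for iface in ("eth0", "eth1", "eth2", "eth3"):  # assumed names
        speed = link_speed_mbps(iface)
        print(f"{iface}: {f'{speed} Mb/s' if speed else 'down/missing'}")
```

Combine a check like this with a simultaneous iperf3 run on all four ports to look for throughput drops or instability under load.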

2. PCIe Switch Configuration

Using a PCIe packet switch to fan out a single Root Port to multiple LAN7430 chips is a standard topology. The JetPack kernel enumerates devices behind PCIe switches normally, and the in-tree lan743x driver binds to each LAN7430, so device enumeration and communication should function in this configuration.

You should still verify that the switch is electrically compatible with the Orin Nano’s root port (link speed, lane width, reference clock) and that each LAN7430 is correctly wired to a downstream port.

To confirm the configuration on the bench:

  • Check that the switch’s upstream link trains at the expected speed and width.
  • Verify with lspci -t that the switch and every LAN7430 behind it enumerate.
  • Confirm that the lan743x driver binds to each NIC and that traffic flows on all ports simultaneously.
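The enumeration check can also be scripted by scanning sysfs for the PCI IDs the in-tree lan743x driver matches on (vendor 0x1055 SMSC/Microchip, device 0x7430). A minimal sketch; the sysfs path is a parameter so the helper can be exercised off-target:

```python
#!/usr/bin/env python3
"""Count LAN7430 controllers visible on the PCI bus (a sketch)."""
from pathlib import Path
from typing import List

LAN7430_VENDOR = 0x1055  # SMSC/Microchip
LAN7430_DEVICE = 0x7430


def find_lan7430(pci_root: str = "/sys/bus/pci/devices") -> List[str]:
    """Return the PCI addresses of all enumerated LAN7430 controllers."""
    found = []
    root = Path(pci_root)
    if not root.is_dir():
        return found
    for dev in sorted(root.iterdir()):
        try:
            vendor = int((dev / "vendor").read_text(), 16)
            device = int((dev / "device").read_text(), 16)
        except (OSError, ValueError):
            continue
        if vendor == LAN7430_VENDOR and device == LAN7430_DEVICE:
            found.append(dev.name)
    return found


if __name__ == "__main__":
    nics = find_lan7430()
    print(f"LAN7430 controllers enumerated: {len(nics)}")
```

If the count is lower than expected, compare against `lspci -t` to see whether the missing device failed link training or failed enumeration.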

3. EtherCAT Compatibility over PCIe

The LAN7430 is a PCIe-to-Gigabit-Ethernet bridge and can be used by software EtherCAT masters (e.g., SOEM or the IgH master) that run on top of a standard Linux network interface. Whether its latency and jitter are acceptable depends on your cycle time and on how well the system is tuned for real-time operation.

To improve real-time behavior:

  • Use a PREEMPT_RT kernel and pin the EtherCAT master thread to an isolated CPU core with real-time priority.
  • Where supported, disable interrupt coalescing and power-management features that add latency on the NIC and the PCIe link.
  • Measure cycle jitter under representative load to confirm that your application’s requirements are met.
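Before involving the NIC at all, it is worth measuring the wake-up jitter of a periodic loop at your intended EtherCAT cycle time, since scheduler jitter often dominates. A minimal sketch; a real master would additionally pin the thread and run it under SCHED_FIFO on a PREEMPT_RT kernel, which is omitted here:

```python
#!/usr/bin/env python3
"""Rough cycle-jitter probe for a periodic control loop (a sketch)."""
import time


def measure_jitter(period_ns: int = 1_000_000, cycles: int = 200):
    """Sleep-based 1 kHz loop by default; returns (max, mean)
    wake-up lateness in nanoseconds."""
    lateness = []
    deadline = time.monotonic_ns() + period_ns
    for _ in range(cycles):
        delta = deadline - time.monotonic_ns()
        if delta > 0:
            time.sleep(delta / 1e9)
        lateness.append(max(0, time.monotonic_ns() - deadline))
        deadline += period_ns
    return max(lateness), sum(lateness) // len(lateness)


if __name__ == "__main__":
    worst, mean = measure_jitter()
    print(f"worst wake-up lateness: {worst / 1000:.1f} us, mean: {mean / 1000:.1f} us")
```

Run it under your real workload (all four ports active) to see how much the network and PCIe traffic degrade the figures.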

4. Expanding Native GbE with an Ethernet Switch

Connecting an Ethernet Switch IC directly to the Native GbE RGMII interface is a possible alternative to using PCIe for all extra ports. This approach can expand the native port into multiple ports.

However, the switch IC must be electrically compatible with the Orin Nano’s GbE interface, and the link between the MAC and the switch must be configured correctly; since there is no PHY to autonegotiate with, the MAC-to-switch link is typically configured as a fixed link. Note also that all downstream ports then share the single 1 Gb/s uplink to the SoC, so the four ports are not independent at full rate.

To validate this approach:

  • Confirm electrical compatibility (RGMII signaling, voltage levels, clock delays) between the MAC and the switch IC.
  • Configure the MAC for the fixed speed and duplex of the switch’s uplink port.
  • Test that all expanded ports function and that the shared uplink bandwidth meets your application’s requirements.
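Because a switch IC on the MAC side has no PHY to autonegotiate with, the usual device-tree approach is a fixed-link node. A hypothetical fragment for illustration only: the `&ethernet` label, the `phy-mode` value, and delay handling are placeholders, and the real MAC node name comes from your BSP’s device tree:

```dts
/* Hypothetical overlay fragment -- adapt node name and phy-mode to
 * your BSP. With a switch on the MAC there is no attached PHY, so the
 * MAC is told its link parameters explicitly via fixed-link. */
&ethernet {
    status = "okay";
    phy-mode = "rgmii-id";   /* delay mode depends on switch strapping */
    fixed-link {
        speed = <1000>;
        full-duplex;
    };
};
```

The switch itself is then either unmanaged (strapped) or configured over MDIO/SPI, depending on the part chosen.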

In summary, your design considerations are valid, and with proper configuration and testing, you should be able to achieve your goals. However, it’s essential to ensure that all components are compatible and configured correctly to avoid any stability issues or performance degradation.


Jetson Orin Nano is listed at the link provided in this post about EtherCAT:

You mentioned EtherCAT Master functionality, but note this comment about using Jetson Orin Nano as an EtherCAT slave:

Merging post on the same topic from Best Ethernet Architecture for 4 Independent GbE Ports on Jetson Orin Nano

Hello,

We are designing a custom carrier board based on Jetson Orin Nano SoM.

Our target configuration requires 4 independent Gigabit Ethernet ports
for AMR / EtherCAT master applications.

We are evaluating three possible architectures:

  1. PCIe switch (x1 upstream) + 3x LAN7430 (downstream) + 1x native GbE

  2. Native GbE connected to an external Ethernet switch IC (4 ports)

  3. Native GbE + PCIe1/2/3 each directly connected to LAN7430
    (using gbe-uphy-config-9 to split PCIe2 into x1 + x1 and enable PCIe3)

We would like to understand:

  • Is gbe-uphy-config-9 (UPHY2 split mode) fully validated and production-ready
    for stable PCIe3 root port operation?

  • Are there any known limitations when using three independent PCIe x1 NICs
    simultaneously?

  • From NVIDIA’s perspective, which architecture is recommended
    for long-term stability and support?

I don’t know of any limitations but am checking these two questions with my team.

A better way is using a quad-port Intel i350 LAN controller. The i350 is a very capable and stable chip with very good Linux support, and it would save you the PCIe switch.

There are M.2 key M boards for this if you’d like to test this suggestion on a devkit.

Example:

https://www.commell.com.tw/product/Peripheral/M.2%20(NGFF)%20card/M2-350.htm

You could report an issue if you find something that is not stable for you.

Many factors can lead to problems, so the issue is not just whether NVIDIA has fully validated it or not.

For example, we do see custom boards where hardware changes lead to problems. Thus, even if we guarantee that a configuration works, a custom board design may break that.

Is it correct that there are no architectural restrictions or resource conflicts when using the native GbE and PCIe LAN interfaces simultaneously?

Our requirement is to provide 4x LAN ports on the carrier board, but we must ensure minimal jitter and low latency for real-time motor control. We are evaluating the following three architectures and would like your recommendation on which is most effective regarding PCIe bandwidth allocation and determinism:

  • Option 1: PCIe Switch Expansion

    • PCIe Switch (x1 upstream) + 3x LAN7430 (downstream) + 1x Native GbE.
  • Option 2: Ethernet Switch IC

    • Native GbE connected to an external 4-port Ethernet Switch IC (shared MAC).
  • Option 3: Direct PCIe Connection (Multi-lane)

    • 1x Native GbE + 3x LAN7430, each directly connected to independent PCIe lanes (PCIe1, PCIe2, PCIe3).

Questions:

  1. From a latency and jitter perspective, does the use of a PCIe switch (Option 1) significantly impact the real-time performance of SOEM compared to direct PCIe connections (Option 3)?

  2. Are there any known issues or driver conflicts when running multiple LAN7430 controllers simultaneously on the Orin Nano under a PREEMPT_RT patched kernel?

  3. Given the limited PCIe lanes on the Orin Nano, which architecture would you recommend for the most stable industrial communication?

EtherCAT mostly runs at 100 Mbit/s, so it doesn’t really matter whether you choose option 1 or 3.

If PCIe0 x4 is available then I’d suggest something like the PI7C9X2G608EL with an x4 upstream port and 4 or 5 x1 downstream ports. This avoids a possible bandwidth bottleneck.

https://www.diodes.com/part/view/PI7C9X2G608EL?BackID=229
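The bottleneck question above is easy to sanity-check with line-rate arithmetic: can a single x1 upstream link carry four full-duplex GbE ports? A rough sketch (raw rate and lane encoding only; PCIe protocol overhead such as TLP headers reduces the usable figures further):

```python
#!/usr/bin/env python3
"""Back-of-envelope PCIe x1 uplink vs. 4x GbE (upper bounds only)."""

def pcie_x1_goodput_mbytes(gen: int) -> float:
    """Per-direction bandwidth of one PCIe lane in MB/s, after encoding."""
    rates = {1: (2.5e9, 8 / 10),     # Gen1: 2.5 GT/s, 8b/10b
             2: (5.0e9, 8 / 10),     # Gen2: 5.0 GT/s, 8b/10b
             3: (8.0e9, 128 / 130)}  # Gen3: 8.0 GT/s, 128b/130b
    gt_per_s, encoding = rates[gen]
    return gt_per_s * encoding / 8 / 1e6

GBE_MBYTES = 1000e6 / 8 / 1e6  # one GbE port, one direction = 125 MB/s

if __name__ == "__main__":
    need = 4 * GBE_MBYTES  # 4 ports, per direction
    for gen in (1, 2, 3):
        have = pcie_x1_goodput_mbytes(gen)
        verdict = "enough" if have >= need else "bottleneck"
        print(f"Gen{gen} x1: {have:.0f} MB/s vs {need:.0f} MB/s needed -> {verdict}")
```

So a Gen1 x1 uplink would bottleneck four saturated GbE ports, Gen2 is marginal once protocol overhead is counted, and Gen3 has comfortable headroom; for 100 Mbit/s EtherCAT none of this matters.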

What does matter is the choice of Ethernet MAC if you have heavy traffic. Intel Ethernet controllers have more offloading capabilities than Realtek or Microchip controllers: IP checksumming, UDP checksumming, TCP checksumming, interrupt coalescing (less interrupt load), etc. An Intel chip (e.g., the i350-AM4, available on mouser.com) needs fewer CPU cycles for the same task than the others. Realtek and Microchip chips are also able to saturate a network, but they cause more CPU and PCIe load.

You should be aware of this fact in order to make an informed decision.

I am planning to use the Intel I350-AM4 Ethernet controller to take advantage of its hardware offloading capabilities (TCP/UDP checksumming, interrupt coalescing, etc.), which requires a PCIe x4 interface for optimal performance as mentioned in several technical discussions.

However, I originally intended to use the PCIe 0 (x4) lane for an M.2 Key-M NVMe connector to serve as the primary boot and storage drive, as I want to entirely eliminate the use of a Micro SD card.

My proposed configuration is as follows:

  1. PCIe 0 x4 Lane: Allocated to Intel I350-AM4 for 4-port Gigabit Ethernet.

  2. PCIe 2 x2 Lane: Allocated to M.2 Key-M (NVMe SSD) for primary storage.

My questions are:

  • If I design the M.2 slot using only PCIe x2 lanes, will it be fully functional as the primary boot device and root filesystem without needing a Micro SD card at all?

  • Are there any known performance bottlenecks or stability issues when running the OS on an NVMe SSD constrained to x2 lanes on the Jetson Orin Nano?

  1. It seems that you have only seen the devkit version of the Orin Nano module. The devkit module has a microSD slot; the production version does not.
  2. Orin Nano and Orin NX ALWAYS boot from the onboard QSPI flash, which holds the bootloader that loads the rest of the binaries from secondary storage. Writing the OS to the SSD is not enough; you MUST use SDK Manager or the underlying flash scripts to program the QSPI and the SSD via USB0.
  3. SSDs work just fine with even a single lane. Look at the Raspberry Pi 5 solution: there is only a single lane available. The smaller SSDs (2230 format) with only one flash chip in particular won’t get much slower. The performance limits are more noticeable with high-capacity, high-speed SSDs that can read and write multiple flash chips in parallel. I guess you don’t need these.
  4. You are supposed to have read the PCIe specifications (the PCIe Base and M.2 specs). They are the source of truth.