The documentation for the PCIe controller on the Tegra K1 states that both Gen1 and Gen2 speeds are supported, but I’ve been unable to make Gen2 speeds work with the mini-PCIe slot on the Jetson kit.
I've added an Intel i350-AM2 in the mini-PCIe slot, with the internal topology then looking like this:
One thing to note is that the onboard RTL8111 is Gen1 - is it forcing the entire bridge to operate in Gen1 mode? If so, is there any way to disable the onboard ethernet? I tried poking around in /proc a bit to see if I could cut power to the device, but haven't come up with anything so far. Any ideas would be welcome… :-)
I'm curious - what does lspci -vvv show for this device? What's the exact model of this NIC, and is there more than one function on the device (perhaps each function negotiates 2.5GT/s and they add up to 5GT/s)?
I ran the remove as root, which removed the 8111 and the port on the PCIe bridge. lspci reflects this change - it doesn't even show the Realtek or the port anymore - but nothing else changes. I reset the remaining PCIe devices, but they still come up at 2.5GT/s (and perform poorly).
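For reference, here's roughly the approach as a minimal C sketch, assuming the standard sysfs PCI remove/rescan interface; the Realtek's BDF shown here is hypothetical - substitute whatever lspci reports on your board:

    #include <stdio.h>

    /* Write "1" to a sysfs attribute to trigger the action. */
    static int write_one(const char *path)
    {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fputs("1", f);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* Detach the onboard NIC; its bridge port can be removed
         * the same way if desired. */
        write_one("/sys/bus/pci/devices/0000:01:00.0/remove");

        /* To bring everything back later:
         * write_one("/sys/bus/pci/rescan"); */
        return 0;
    }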
This particular mPCIe slot has a USB DP/DM differential pair on it which can be used instead of the PCIe TX/RX. I don't have one of these NICs, so I'm still curious about how they chose to wire it, and am looking for clues as to whether this could be part of the answer - hence my asking for the lspci -vvv output and the exact NIC model. Some manufacturers may have chosen to take a USB version and make a mini-PCIe card by skipping the actual PCIe TX/RX, or perhaps even using both… if they did use USB, I'm wondering if the USB is sharing with something else (does lsusb show anything?). I'm just exploring the possibilities. The PCIe lanes are themselves packet switched, so I'm thinking that sharing the TX/RX with another device would not account (by itself) for the lower speed. I'm leaning towards the idea of the driver not choosing a higher speed.
I built the card. It works fine at 5GT/s in a Kabini board I have, so I know it works in general (it even works in the Jetson; it just maxes out at about 1.4Gbps in each direction because of the lower lane speed). The Tegra also has a strange limitation in that the maximum PCIe payload size (at least on the Jetson) is 128 bytes, half of a typical PCIe configuration (sadly adding to the overhead), although I don't believe that should impact the Gen1 vs. Gen2 negotiation.
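To put rough numbers on that overhead - the ~24 bytes of per-TLP framing cost below is an assumption for illustration, and real links lose more to flow control and ACK traffic:

    #include <stdio.h>

    /* Back-of-the-envelope PCIe throughput sketch: x1 link,
     * 8b/10b encoding (Gen1/Gen2), assumed per-TLP overhead. */
    int main(void)
    {
        const double line_rate[] = { 2.5, 5.0 };  /* Gen1, Gen2 in GT/s */
        const double payload  = 128.0;            /* Jetson max payload  */
        const double overhead = 24.0;             /* assumed per-TLP cost */

        for (int i = 0; i < 2; i++) {
            double raw = line_rate[i] * 8.0 / 10.0;   /* after 8b/10b */
            double eff = payload / (payload + overhead);
            printf("Gen%d x1: ~%.2f Gbps raw, ~%.2f Gbps with 128B TLPs\n",
                   i + 1, raw, raw * eff);
        }
        return 0;
    }

With those assumptions a Gen1 x1 link tops out around 1.7Gbps of payload, so the ~1.4Gbps I'm seeing is bumping up against the Gen1 ceiling.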
I'm leaning towards the Tegra bridge not being able to operate at more than one speed - I'm going to get another Jetson, remove the Realtek from the board, and see if that changes anything, barring any other suggestions.
For reference, here is the lspci -vvv output from both endpoints and the bridge port:
Is your Kabini board running Linux? I'm guessing it is, and if so, it would also use the pcieport driver. I'm not all that familiar with Kabini, but it looks to be fully x86, whereas Jetson is ARMv7.
I had first thought perhaps the card was using the PHY, data link, and transaction layers in a way that restricted the driver to 2.5GT/s, but the fact that Kabini made it to 5GT/s with the x86 version of the same driver means this can't be true (plus it seems the chip you are using has built-in PCIe support). On the other hand, even if both systems use the Linux pcieport driver, there would be significant ARMv7-versus-x86 differences between the two builds. The Tegra124 SoC has a register which could be set to force a speed restriction. See the Tegra K1 TRM:
30.3.5.13 T_PCIE2_RP_LINK_CONTROL_STATUS_2 (page 2112)
Since Jetson is a development board and not for general use, I suspect pcieport driver/hardware development was simplified by having the driver default to 2.5GT/s.
I happen to have a Realtek wired NIC in that slot, and it has the same issue… 5GT/s capability, 2.5GT/s actual. It's possible that PCIe is intentionally throttling back the rate if it detects signals requiring this for quality reasons, but both our NICs having this issue would then tend to mean the trace routing on the Jetson itself was the problem (do you see errors in the logs?). Had trace routing been the issue, I think general operation of the mPCIe would be a bit more "flaky" (that's a technical term!). You might want to check that register to see if the driver on the Jetson version was told to limit to 2.5GT/s.
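If it helps, here's a minimal userspace sketch for checking that: it reads the Target Link Speed field of the root port's Link Control 2 register (which is what T_PCIE2_RP_LINK_CONTROL_STATUS_2 corresponds to) by walking the standard capability list in config space. Run it as root; the bridge BDF shown is hypothetical - use the bridge address lspci reports:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const char *cfg = "/sys/bus/pci/devices/0000:00:00.0/config";
        uint8_t buf[256];
        FILE *f = fopen(cfg, "rb");

        if (!f || fread(buf, 1, sizeof(buf), f) != sizeof(buf)) {
            perror(cfg);
            return 1;
        }
        fclose(f);

        /* Walk the capability list (head pointer at 0x34) until we
         * find the PCI Express capability (ID 0x10). */
        uint8_t pos = buf[0x34];
        while (pos && buf[pos] != 0x10)
            pos = buf[pos + 1];
        if (!pos) {
            fprintf(stderr, "no PCIe capability found\n");
            return 1;
        }

        /* Link Control 2 sits at capability offset 0x30; bits [3:0]
         * are Target Link Speed: 1 = 2.5GT/s (Gen1), 2 = 5GT/s (Gen2). */
        uint16_t lnkctl2 = buf[pos + 0x30] | (uint16_t)(buf[pos + 0x31] << 8);
        printf("Target Link Speed field = %u\n", lnkctl2 & 0xfu);
        return 0;
    }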
I doubt link control status is the problem (the bridge does actually report 5GT/s capability, and I don't think it would do that if the registers were set to Gen1).
At the moment I'm leaning towards the problem being the integrated RTL8111. I have another few boards on the way, so I'll try removing that chip if all else fails. (Going to try building a new system using U-Boot first and see if I can disable it in software.)
I'm not sure, but the advertised capability might come from a hardware query… in which case the PHY/link would show what they could do, but not necessarily the state the driver has actually set. I would not be surprised either way - whether it is the RTL8111 or the driver - but I would still lean strongly towards the driver doing this.
If you can afford to pop the RTL8111 off, this is probably the fastest/easiest way to test. Should it still fail, I would be very suspicious of link training being intentionally limited via that register. If I were developing the PCIe driver, I'd certainly add a log message when a device asks for 5GT/s and fallback to 2.5GT/s is required, but I have not seen any such message with my own mini-PCIe card.
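Something along these lines - purely hypothetical, not actual Tegra driver code - is what I'd expect in the link-speed path:

    /* Hypothetical diagnostic: log when a gen2 request falls
     * back to gen1 during link training. */
    if (!tegra_pcie_link_speed(true))
        pr_info("PCIE: 5GT/s requested, link stayed at 2.5GT/s\n");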
When you find out, I’ll be curious to know what happens.
When I force my design to connect at 2.5 GT/s it works fine, so I’m thinking this is a signal integrity issue. In my case I have a mini PCIe to PCIe adapter:
Those messages are pretty much a smoking-gun guarantee that you are right about signal integrity. What I still have to wonder about is that the original poster did not see error messages. It's possible that he was not "forcing" the issue, and thus the PCIe drivers had no reason to report signal problems - they just did their job and switched to the best reliable speed without complaint. In that case it is very hard to know with certainty without some very expensive tools to actually view the signals.
What was the exact method of forcing the speed to Gen2? I might experiment with that on one of my mPCIe slot cards.
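From the Linux side, one generic way to experiment is the PCIe-standard Target Link Speed field in Link Control 2 plus a link retrain. A minimal sketch, assuming root access and a hypothetical root-port BDF (not necessarily what was done here):

    #include <stdio.h>
    #include <stdint.h>

    #define CFG "/sys/bus/pci/devices/0000:00:00.0/config"

    /* Find the PCI Express capability (ID 0x10) in config space. */
    static int find_pcie_cap(uint8_t *buf)
    {
        uint8_t pos = buf[0x34];          /* capability list head */
        while (pos && buf[pos] != 0x10)
            pos = buf[pos + 1];
        return pos ? pos : -1;
    }

    int main(void)
    {
        uint8_t buf[256];
        FILE *f = fopen(CFG, "r+b");

        if (!f || fread(buf, 1, sizeof(buf), f) != sizeof(buf)) {
            perror(CFG);
            return 1;
        }

        int cap = find_pcie_cap(buf);
        if (cap < 0) {
            fprintf(stderr, "no PCIe capability\n");
            return 1;
        }

        /* Target Link Speed (bits [3:0] of Link Control 2, at
         * capability offset 0x30) = 1 caps the link at 2.5GT/s. */
        uint8_t tls = (buf[cap + 0x30] & ~0xf) | 0x1;
        fseek(f, cap + 0x30, SEEK_SET);
        fwrite(&tls, 1, 1, f);

        /* Retrain Link: bit 5 of Link Control (offset 0x10). */
        uint8_t lc = buf[cap + 0x10] | (1 << 5);
        fseek(f, cap + 0x10, SEEK_SET);
        fwrite(&lc, 1, 1, f);

        fclose(f);
        return 0;
    }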
Yes, I think that’s likely the case. I also tested a design that will do gen2 but usually connects at gen1 speeds to the K1. When it occasionally connects at gen2 speeds I get errors.
In this case the PCIe endpoint is inside an Altera FPGA, which gives me lots of control.
Sorry - just reread what you actually posted. I didn’t force my design to gen2 - I forced it to gen1. The Tegra occasionally negotiates gen2 with my board but it doesn’t actually work reliably.
It occurred to me last night that the Tegra may not have tried gen2 at all until I installed the R21.1 update. I'm in the middle of things right now, so I don't want to go back to R19.3, but it could be that the gen1/gen2 thing is something that got fixed in R21.1. The original post was from before R21.1 was out.
The R19 software releases forced PCIe to gen1 speed, though the hardware is capable of gen2. R21 releases detect the capability of the cards on the bus and should negotiate gen1/gen2 properly based upon the card.
Something like the following in R19.3 can force gen2, but best to move forward to R21:
clk_set_rate(tegra_pcie.pcie_mselect, 408000000);
/* pciex is reset only but need to be enabled for dvfs support */
err = clk_enable(tegra_pcie.pcie_xclk);
if (err) {

@@ -1769,7 +1769,7 @@ static void tegra_pcie_enable_features(void)
 	PR_FUNC_LINE;
 	/* configure all links to gen2 speed by default */
-	if (!tegra_pcie_link_speed(false))
+	if (!tegra_pcie_link_speed(true))
 		pr_info("PCIE: No Link speed change happened\n");
This makes me curious about the nature of the gen2 failures under R19.x versus R21.1. Restated: would software on R21.1 have exactly the same issues as R19.3 with the card listed above that had to be backed down to gen1 speeds? How much is trace layout on the card, how much is trace layout on the Jetson, and how much is software? (Just a rhetorical question, because the answers are interesting and hard to know.)