TX2 PCIe configurations

Dear NVIDIA,

I mounted an NVMe SSD (Gen3 x4) on a TX2 module, and I would like to see some diagnostic information about this device. The boot log from probing the SSD is as follows:

roland@ubuntu:~$ dmesg | grep pcie
[    0.436695] iommu: Adding device 10003000.pcie-controller to group 47
[    0.436710] arm-smmu: forcing sodev map for 10003000.pcie-controller
[    0.892965] tegra-pcie 10003000.pcie-controller: 2x1, 1x1, 1x1 configuration
[    0.893679] tegra-pcie 10003000.pcie-controller: PCIE: Enable power rails
[    0.893962] tegra-pcie 10003000.pcie-controller: probing port 0, using 2 lanes
[    0.896112] tegra-pcie 10003000.pcie-controller: probing port 2, using 1 lanes
[    1.321790] tegra-pcie 10003000.pcie-controller: link 2 down, retrying
[    1.727808] tegra-pcie 10003000.pcie-controller: link 2 down, retrying
[    2.129792] tegra-pcie 10003000.pcie-controller: link 2 down, retrying
[    2.131819] tegra-pcie 10003000.pcie-controller: link 2 down, ignoring
[    2.238057] tegra-pcie 10003000.pcie-controller: PCI host bridge to bus 0000:00
[    2.250607] pcieport 0000:00:01.0: Signaling PME through PCIe PME interrupt
[    2.250620] pcie_pme 0000:00:01.0:pcie001: service driver pcie_pme loaded
[    2.250754] aer 0000:00:01.0:pcie002: service driver aer loaded

The SSD should be on port 0. However, I am not entirely sure what exactly

pcie-controller: 2x1, 1x1, 1x1 configuration

means.

Does this “2x1” mean “Gen2 x 1 lane”, “2 ports with 1 lane each”, or “1 port using 2 lanes”?

And is there a way to find out the exact current PCIe lane configuration of this SSD device?

Thank you.

/Roland

Are you using an NVIDIA devkit or a custom carrier board of your own design?

“2x1, 1x1, 1x1” describes the lane mapping of the three PCIe controllers, not the link speed: the first controller is assigned 2 lanes, and the other two controllers get 1 lane each.

We have a mapping table for this in the TX2 OEM Product Design Guide (shown below); Configurations 3, 4, 5 and 6 are of this kind.

Taking Configuration #5 as an example, three PCIe controllers are enabled: PCIE#0_x, PCIE#1_x and PCIE#2_x, mapped to two lanes, one lane and one lane respectively. This matches your boot log, where port 0 is probed with 2 lanes.

[Image: PCIe/USB 3.0 lane mapping configuration table from the TX2 OEM Product Design Guide]

This is decided by the device tree, so it has to match the hardware design of your carrier board.
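As a rough way to check what the booted device tree actually assigns, the lane count of each root port can be read back through procfs. The node path below is inferred from the “10003000.pcie-controller” name in your boot log, and the per-port property name (nvidia,num-lanes) comes from the Tegra PCIe device-tree binding, so treat both as assumptions and verify them against your own kernel sources:

# Sketch: list the lane count assigned to each Tegra PCIe root port.
# Node path and property name are assumptions; adjust them to your DT.
for port in /proc/device-tree/pcie-controller@10003000/pci@*; do
    printf '%s: nvidia,num-lanes = ' "${port##*/}"
    od -An -tx1 "$port/nvidia,num-lanes"
done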

Thanks for the explanation. I am using my own carrier board.

Below is the output of ‘sudo lspci -vv’:

01:00.0 Non-Volatile memory controller: Sandisk Corp Device 5007 (rev 01) (prog-if 02 [NVM Express])
        Subsystem: Sandisk Corp Device 5007
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 379
        Region 0: Memory at 40100000 (64-bit, non-prefetchable) [size=16K]
        Region 4: Memory at 40104000 (64-bit, non-prefetchable) [size=256]
        Capabilities: [80] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [90] MSI: Enable- Count=1/32 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [b0] MSI-X: Enable+ Count=17 Masked-
                Vector table: BAR=0 offset=00002000
                PBA: BAR=4 offset=00000000
        Capabilities: [c0] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <1us, L1 unlimited
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
                DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L0s <256ns, L1 <8us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s, Width x2, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range B, TimeoutDis+, LTR+, OBFF Not Supported
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR+, OBFF Disabled
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
        Capabilities: [150 v1] Device Serial Number 00-00-00-00-00-00-00-00
        Capabilities: [1b8 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns
        Capabilities: [300 v1] #19
        Capabilities: [900 v1] L1 PM Substates
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+
                          PortCommonModeRestoreTime=32us PortTPowerOnTime=10us
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                           T_CommonMode=0us LTR1.2_Threshold=0ns
                L1SubCtl2: T_PwrOn=10us
        Kernel driver in use: nvme

I noticed in the printout that LnkCap reports 8GT/s, while the current link status (LnkSta) is only 5GT/s. The SSD is capable of Gen3 speed (8GT/s), so is this because the TX2 only supports Gen2, or because my board has poor link quality?

I also noticed that the training flag in the link status is “Train-”; I am not sure whether that tells us anything.
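For what it's worth, the AER correctable-error status in the dump above (CESta) shows no receiver errors or bad TLPs/DLLPs, which I would expect to accumulate if the physical link quality were marginal. That status can be checked on its own with, for example:

sudo lspci -s 01:00.0 -vv | grep -E 'CESta|UESta'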

/Roland

Yes, please refer to the datasheet.

TX2 PCIe only supports up to Gen2.
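One way to confirm this on the running system (the root-port address 0000:00:01.0 is taken from your dmesg output) is to look at the root port's own link capability, which should report 5GT/s:

sudo lspci -s 00:01.0 -vv | grep -E 'LnkCap|LnkSta'

Since a link trains to the highest speed both ends support, a Gen3 (8GT/s) SSD behind a Gen2 (5GT/s) root port will negotiate 5GT/s.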
