NVHS L4 to L7 can't recognize SSD

Hello there,

I have two SSDs; I call the first one SSD2 and the other SSD1.

Let’s talk about SSD2 first. I’m checking whether I can get UPHY1 (L4 to L7) working with SSD2. Here are the pins:

HS_UPHY1_L4_TX_P
HS_UPHY1_L4_TX_N
HS_UPHY1_L5_RX_P
HS_UPHY1_L5_RX_N
HS_UPHY1_L5_TX_P
HS_UPHY1_L5_TX_N
HS_UPHY1_L6_RX_P
HS_UPHY1_L6_RX_N
HS_UPHY1_L6_TX_P
HS_UPHY1_L6_TX_N
HS_UPHY1_L7_RX_P
HS_UPHY1_L7_RX_N
HS_UPHY1_L7_TX_P
HS_UPHY1_L7_TX_N

Here is my device tree. According to the AGX PCIe documentation, UPHY1 is enabled by default.

pcie@141a0000 {
        status = "okay";

        vddio-pex-ctl-supply = <&vdd_1v8_ls>;
        vpcie3v3-supply = <&vdd_3v3_pcie>;
        vpcie12v-supply = <&vdd_12v_pcie>;

        phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
               <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
               <&p2u_nvhs_6>, <&p2u_nvhs_7>;
        phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
                    "p2u-5", "p2u-6", "p2u-7";
};

And I’m using the PCIe C7 clkreq and rst pins below.

pex_l7_clkreq_n_pag0 {
        nvidia,pins = "pex_l7_clkreq_n_pag0";
        nvidia,function = "pe7";
        nvidia,pull = <TEGRA_PIN_PULL_NONE>;
        nvidia,tristate = <TEGRA_PIN_DISABLE>;
        nvidia,enable-input = <TEGRA_PIN_ENABLE>;
        nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
        nvidia,lpdr = <TEGRA_PIN_DISABLE>;
};

pex_l7_rst_n_pag1 {
        nvidia,pins = "pex_l7_rst_n_pag1";
        nvidia,function = "pe7";
        nvidia,pull = <TEGRA_PIN_PULL_NONE>;
        nvidia,tristate = <TEGRA_PIN_DISABLE>;
        nvidia,enable-input = <TEGRA_PIN_DISABLE>;
        nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
        nvidia,lpdr = <TEGRA_PIN_DISABLE>;
};

After booting, SSD2 is not mounted… and lspci shows no PCI information for it.

However, SSD1 works with UPHY0 (L4 to L7):

HS_UPHY0_L4_RX_P
HS_UPHY0_L4_RX_N
HS_UPHY0_L4_TX_P
HS_UPHY0_L4_TX_N
HS_UPHY0_L5_RX_P
HS_UPHY0_L5_RX_N
HS_UPHY0_L5_TX_P
HS_UPHY0_L5_TX_N
HS_UPHY0_L6_RX_P
HS_UPHY0_L6_RX_N
HS_UPHY0_L6_TX_P
HS_UPHY0_L6_TX_N
HS_UPHY0_L7_RX_P
HS_UPHY0_L7_RX_N
HS_UPHY0_L7_TX_P
HS_UPHY0_L7_TX_N

And SSD1 uses the PCIe C4 clkreq and rst pins:

pex_l4_clkreq_n_pl0 {
        nvidia,pins = "pex_l4_clkreq_n_pl0";
        nvidia,function = "pe4";
        nvidia,pull = <TEGRA_PIN_PULL_NONE>;
        nvidia,tristate = <TEGRA_PIN_DISABLE>;
        nvidia,enable-input = <TEGRA_PIN_ENABLE>;
        nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
        nvidia,lpdr = <TEGRA_PIN_DISABLE>;
};

pex_l4_rst_n_pl1 {
        nvidia,pins = "pex_l4_rst_n_pl1";
        nvidia,function = "pe4";
        nvidia,pull = <TEGRA_PIN_PULL_NONE>;
        nvidia,tristate = <TEGRA_PIN_DISABLE>;
        nvidia,enable-input = <TEGRA_PIN_DISABLE>;
        nvidia,io-high-voltage = <TEGRA_PIN_ENABLE>;
        nvidia,lpdr = <TEGRA_PIN_DISABLE>;
};

After booting, SSD1 is mounted, and lspci shows the PCI information:

ubuntu@tegra-ubuntu:~$ lspci
0004:00:00.0 PCI bridge: NVIDIA Corporation Device 229c (rev a1)
0004:01:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. Device 1202 (rev 01)
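As a minimal sketch, the by-hand check above can be scripted: on the working port, lspci shows both the NVIDIA bridge and the NVMe endpoint, while on the failing C5 port the endpoint line is absent. The sample data below is the working lspci output from this thread; the file name `lspci.txt` is purely illustrative.

```shell
# Sample data: the working lspci output quoted above (file name is illustrative).
cat > lspci.txt <<'EOF'
0004:00:00.0 PCI bridge: NVIDIA Corporation Device 229c (rev a1)
0004:01:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. Device 1202 (rev 01)
EOF

# If only the bridge (or nothing) shows up, the link never trained and the
# NVMe endpoint was never enumerated.
if grep -q "Non-Volatile memory controller" lspci.txt; then
    echo "NVMe endpoint enumerated"
else
    echo "NVMe endpoint missing - check link training and clkreq/rst wiring"
fi
```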

Here is the complete DT:
extracted_proc.txt (393.6 KB)

My ODMDATA="gbe-uphy-config-22,hsstp-lane-map-3,nvhs-uphy-config-0,hsio-uphy-config-0,gbe0-enable-10g";
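ODMDATA is a comma-separated list of UPHY/lane options; splitting the string from above makes each setting easy to review on its own line:

```shell
# The ODMDATA string quoted above, one setting per line.
ODMDATA="gbe-uphy-config-22,hsstp-lane-map-3,nvhs-uphy-config-0,hsio-uphy-config-0,gbe0-enable-10g"
printf '%s\n' "$ODMDATA" | tr ',' '\n'
```

The `nvhs-uphy-config-0` entry is the one that selects the NVHS UPHY (UPHY1) configuration in question.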

Do I have the wrong configuration for UPHY1?

*** Please note that this reply is generated by LLM automatically ***

To address your question about configuring UPHY1 for SSD2 on the Jetson AGX Orin, let’s break down the information provided and compare it with the known configurations and settings for the Jetson series, particularly focusing on PCIe and UPHY settings.

  1. Understanding UPHY and PCIe: UPHY (Universal PHY) is a physical layer transceiver that can be configured for various interfaces, including PCIe, USB, and more. The Jetson AGX Orin series, like other Jetson modules, utilizes UPHY for its high-speed interfaces. PCIe (Peripheral Component Interconnect Express) is a high-speed interface standard used for connecting peripherals like SSDs (Solid State Drives) to the system.

  2. Your Configuration: You’re trying to configure UPHY1 L4 to L7 for SSD2, using the PCIe C7 clkreq and rst pins. However, after booting, you cannot see SSD2 mounted, and lspci does not show any PCI information for it. In contrast, SSD1 works fine with UPHY0 L4 to L7, using the PCIe C4 clkreq and rst pins, and you can see it mounted, with lspci showing the expected PCI information.

  3. Comparison with Known Configurations:

    • hsio-uphy-config-40 and hsio-uphy-config-0 settings provide configurations for UPHY0 (HSIO) that use USB 3.2 for lanes 0 and 1 and PCIe x1 for lane 3, indicating flexible configurations for UPHY lanes.
    • The gbe-uphy-config-8 setting uses PCIe x2 for lane 0, showing another example of UPHY configuration for a specific use case.
    • The Jetson P3767 SOM has USB ports and is designed for the NVIDIA Jetson Orin series, suggesting that USB and PCIe configurations are crucial for peripheral connectivity.
  4. Potential Issues:

    • Incorrect Device Tree Configuration: The device tree configuration might not be correctly set up for UPHY1 to work with SSD2. The fact that SSD1 works with UPHY0 suggests that the issue might be specific to the UPHY1 configuration.
    • PHY Lane Configuration: The configuration of the PHY lanes (L4 to L7 for UPHY1) might not be appropriate for SSD2 or might conflict with other settings.
    • Clock and Reset Signals: The use of the PCIe C7 clkreq and rst pins for SSD2, as opposed to C4 for SSD1, could indicate a timing or signal-integrity issue affecting the detection and operation of SSD2.
  5. Recommendations:

    • Review Device Tree: Double-check the device tree configuration for UPHY1 and ensure it matches the requirements for SSD2. Compare the working configuration for UPHY0 and SSD1 to identify any discrepancies.
    • PHY Lane Configuration: Verify that the PHY lanes are correctly configured for SSD2. This might involve checking the datasheet for the Jetson AGX Orin or consulting with NVIDIA support for specific guidance on UPHY configurations.
    • Clock and Reset Signals: Ensure that the clock and reset signals for PCIE7 are correctly configured and do not interfere with the operation of SSD2.

Given the complexity of device tree configurations and the specificity of UPHY and PCIe settings, it’s possible that a detailed comparison with a known working configuration (like that of SSD1) and a thorough review of the device tree and PHY settings will reveal the issue. If the problem persists, consulting the official NVIDIA documentation or seeking support from NVIDIA forums or directly from NVIDIA might provide more tailored advice.


UPHY1 is in RP mode for C5, so I used nvhs-uphy-config-0 and pcie@141a0000. I think that is correct. Right?

Hi,

Actually, the Orin AGX default device tree already enables UPHY1 for PCIe C5, so you don’t need any additional software configuration for it.

Okay, I won’t change it.

Here is my debug message.
putty.log (72.5 KB)

So… is the UPHY1 configuration fine for an M.2 SSD?

By the way, my JetPack version is 6.2.

I thought I needed to change pcie@141a0000 (C5) to pcie@141e0000 (C7), because my SSD2 uses the pex_l7 clkreq…

However, if I change it to pcie@141e0000, the device has problems during booting:

ERROR:   Exception reason=0 syndrome=0xbe000011
ERROR:   **************************************
ERROR:   RAS Uncorrectable Error in IOB, base=0xe010000:
ERROR:          Status = 0xec000612
ERROR:   SERR = Error response from slave: 0x12
ERROR:          IERR = CBB Interface Error: 0x6
ERROR:          Overflow (there may be more errors) - Uncorrectable
ERROR:          MISC0 = 0xc45e0040
ERROR:          MISC1 = 0x194c860000000000
ERROR:          MISC2 = 0x0
ERROR:          MISC3 = 0x0
ERROR:          ADDR = 0x8000000003e900c0
ERROR:   **************************************
ERROR:   sdei_dispatch_event returned -1
ERROR:   **************************************
ERROR:   RAS Uncorrectable Error in ACI, base=0xe01a000:
ERROR:          Status = 0xe8000904
ERROR:   SERR = Assertion failure: 0x4
ERROR:          IERR = FillWrite Error: 0x9
ERROR:          Overflow (there may be more errors) - Uncorrectable
ERROR:          ADDR = 0x8000000003e900c0
ERROR:   **************************************
ERROR:   sdei_dispatch_event returned -1
ERROR:   Powering off core

Hi,

What is your hardware design, exactly? The clkreq and reset pins are fixed; you cannot use the C7 clkreq/reset pins for C5.

The hardware design uses these pins to work with an M.2 SSD:

HS_UPHY1_L4_TX_P
HS_UPHY1_L4_TX_N
HS_UPHY1_L5_RX_P
HS_UPHY1_L5_RX_N
HS_UPHY1_L5_TX_P
HS_UPHY1_L5_TX_N
HS_UPHY1_L6_RX_P
HS_UPHY1_L6_RX_N
HS_UPHY1_L6_TX_P
HS_UPHY1_L6_TX_N
HS_UPHY1_L7_RX_P
HS_UPHY1_L7_RX_N
HS_UPHY1_L7_TX_P
HS_UPHY1_L7_TX_N


PEX_CLK7P
PEX_CLK7N
PEX_L7_CLKREQ_N
PEX_L7_RST_N

As you can see, I modified the pinmux and device tree, and I think my UPHY1 and clock configuration is fine. But I still can’t see the SSD mounted.

This design is not valid. As I already said, you cannot use the C7 clkreq/reset pins for C5.

So… does that mean that if I have PCIe on C5, I can only use PEX_L5_CLKREQ_N and PEX_L5_RST_N?

And on C6, only PEX_L6_CLKREQ_N and PEX_L6_RST_N, and so on?

Yes, correct.
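The fixed controller-to-sideband mapping confirmed above can be sketched as a small lookup. The C5 and C7 DT node addresses come from this thread; any other controller number is not covered here and should be verified against the Orin documentation and pinmux spreadsheet.

```shell
# Hedged sketch: map a PCIe controller number to its DT node and the only
# clkreq/rst pins it may use. Addresses for C5/C7 are from this thread;
# other controllers are left as "unknown" rather than guessed.
pcie_sideband() {
    n="$1"                       # controller number, e.g. 5 for C5
    case "$n" in
        5) addr="141a0000" ;;    # pcie@141a0000 (C5, UPHY1 NVHS)
        7) addr="141e0000" ;;    # pcie@141e0000 (C7)
        *) addr="unknown"  ;;    # not stated in this thread
    esac
    echo "C${n}: DT node pcie@${addr}, PEX_L${n}_CLKREQ_N, PEX_L${n}_RST_N"
}

pcie_sideband 5
```

The point of the lookup is exactly the rule stated above: the sideband pin number always matches the controller number, so pairing C5 with PEX_L7_* can never work.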

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.