This is a continuation of a previous post.
To recap, I am trying to get my Orin SoMs to recognize an external NVMe 980 GB drive, which is set up on PCIe controller C0. I believe my pinmux is correct.
I changed the code for PCIe x1 (C0) and PCIe x8 (C7) in these files: tegra234-p3737-pcie.dtsi, tegra234-mb1-bct-pinmux-p3701-0000-a04.dtsi, and tegra234-mb1-bct-gpio-p3701-0000-a04.dtsi. I recompiled the kernel, copied the files over, and flashed.
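One way to confirm that edits like these actually end up in the DTB being flashed is to decompile it and look at the C0 node; the .dtb name below is only an example, and the real one comes from the DTB_FILE setting in the board config:
# Run from the Linux_for_Tegra directory; the .dtb name is an example.
dtc -I dtb -O dts kernel/dtb/tegra234-p3701-0000-p3737-0000.dtb -o /tmp/kernel.dts
# The C0 root port node should show the edited status/phys/gpios entries.
grep -n -A 20 'pcie@14180000' /tmp/kernel.dts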
After booting without issue and logging into the SoM, running lsblk does not show the NVMe drive.
We have a similar setup on JetPack 5.1.2 using Xavier AGX SoMs, and it recognizes the NVMe drive correctly. Our ODMDATA setting for the Xavier (which takes fewer parameters than the Orin's ODMDATA) is:
ODMDATA=0x09191000
The GPUs need to be in endpoint mode as well, which is why C5 is set to 'nvhs-uphy-config-1' for the Orin and ODMDATA is 0x09191000 for the Xavier.
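For comparison, the Orin's ODMDATA is a comma-separated list of UPHY config tokens rather than a single hex word. The line below only illustrates the shape, with <n> as placeholders; the actual tokens have to come from the adaptation guide tables for the chosen lane mapping (nvhs-uphy-config-1 is the endpoint-mode value mentioned above):
# Illustrative shape only; pick each token from the adaptation guide tables.
ODMDATA="gbe-uphy-config-<n>,hsstp-lane-map-<n>,hsio-uphy-config-<n>,nvhs-uphy-config-1";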
I have attached the configuration file used for flashing (I added a .txt extension so I could upload it here): auto2-gpgpu.conf.txt (2.6 KB)
I perform a full flash every time (sudo ./flash.sh auto2-gpgpu mmcblk0p1)
What steps do I need to take to check whether my configuration is correct and to get the Orin SoMs to recognize the external NVMe drive?
As I already said in the previous post, the document already tells you how to check whether each bit of ODMDATA is correct. Please check that yourself first.
If ODMDATA is correct and all the patches mentioned in the document have been added, then the next step is to check from the hardware-signal side.
The value gets changed during flashing by combining the config file with the original BPMP DTB. Since you only converted the original DTB, the value you see there could be mismatched.
It is okay to ignore that if the runtime ODMDATA is the expected value.
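If it helps, one way to check the value that actually got applied is to decompile the BPMP DTB generated during flashing and look at its UPHY settings; the file name below is an example, so take the real one from the flash log (it lands under Linux_for_Tegra/bootloader/):
# Example file name; check the flash log for the one generated for your board.
dtc -I dtb -O dts bootloader/tegra234-bpmp-3701-0000-3737-0000_with_odm.dtb | grep -i -B 2 -A 6 uphy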
I'm making some progress but still not fully there yet. Based on a few forum posts, I found a setting in the dtsi file at /hardware/nvidia/soc/t23x/kernel-dts/tegra234-soc/tegra234-soc-pcie.dtsi.
I modified the 14180000 & 141e0000 controller nodes to status = "okay".
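To confirm the change lands at runtime, the node status can be read straight from the live device tree after boot (14180000 is C0 and 141e0000 is C7 here):
# Each should print "okay" if the controller node is enabled in the booted DT.
cat /proc/device-tree/pcie@14180000/status; echo
cat /proc/device-tree/pcie@141e0000/status; echo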
Now, after a full flash, the Orin's dmesg log shows more than it did before:
[ 6.539233] tegra194-pcie 14180000.pcie: Adding to iommu group 9
[ 6.551633] tegra194-pcie 14180000.pcie: Using GICv2m MSI allocator
[ 7.775072] tegra194-pcie 14180000.pcie: Using GICv2m MSI allocator
[ 7.796938] tegra194-pcie 14180000.pcie: host bridge /pcie@14180000 ranges:
[ 7.804122] tegra194-pcie 14180000.pcie: IO 0x0038100000..0x00381fffff -> 0x0038100000
[ 7.812817] tegra194-pcie 14180000.pcie: MEM 0x2728000000..0x272fffffff -> 0x0040000000
[ 7.829555] tegra194-pcie 14180000.pcie: MEM 0x2440000000..0x2727ffffff -> 0x2440000000
[ 8.945332] tegra194-pcie 14180000.pcie: Phy link never came up
[ 8.951505] tegra194-pcie 14180000.pcie: PCI host bridge to bus 0000:00
Attached is the output of grepping dmesg for "pcie": dmesg-grep-pcie.txt (9.5 KB)
It looks like none of the links for the controllers I enabled in the device tree are coming up, so I must be missing something.
How do I get the PHY link up for C0? I followed Step 1 mentioned above, but I am unsure how to complete Steps 2 & 3.
As I keep saying in this post and the previous one, what you are doing now has already been done by many other users/partners. The document already has the sample. Make sure you made that device tree change first…
Also, please try other kinds of NVMe drives during the test. Sometimes the PCIe device side has special requirements for link detection. What you are doing now is just the general setup; it won't handle that kind of case.
For example, if you test 3 kinds of NVMe SSDs and 2 of them work, then the general setup is correct and done. There is no need to keep re-checking the general setup to find out why the remaining SSD is not detected.
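Once a drive does enumerate, a quick way to see whether the link trained and at what width/speed is something like the following (device addresses vary, so read them from the lspci output):
lsblk -d -o NAME,MODEL,SIZE                  # the NVMe namespace should appear here
lspci                                        # the SSD should show up behind the root port
sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'   # negotiated link speed and width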
I modified those 3 files (tegra234-p3737-pcie.dtsi, tegra234-mb1-bct-pinmux-p3701-0000-a04.dtsi, & tegra234-mb1-bct-gpio-p3701-0000-a04.dtsi) exactly according to the document you pointed to above. Is that all that is needed? It doesn't show Steps 2 and 3, so I wanted to check.
I know others have had the same issue on the forums, but I did not see solutions (e.g. how to add the 'pipe2uphy' entries or the 'reset-gpios' property).
When you said 'Make sure you did that device tree change first', what did you mean? Enabling C0 & C7 via the tegra234-soc-pcie.dtsi file, i.e. changing the status to "okay" instead of "disabled"?
We are using the same NVMe M.2 SSDs that we used in our Xavier solution, where they work without any issue.
You already added pipe2uphy when you put those 'phys' entries into your kernel device tree.
And (3) is about the case where the Tegra PCIe controller is operated in endpoint mode. That is not your case: your Tegra PCIe controller operates in root port mode so that it can work with your PCIe endpoint device (the NVMe SSD).
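A quick runtime check that the phys hookup actually made it into the booted device tree (the node path below assumes C0 at 14180000):
# List the C0 root port node and print its phy-names; tr turns the
# NUL-separated string list into one name per line.
ls /proc/device-tree/pcie@14180000/
cat /proc/device-tree/pcie@14180000/phy-names | tr '\0' '\n'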
OK, so tegra234-soc-pcie.dtsi doesn't need to be updated? I wasn't getting anything for tegra194-pcie 14180000 in dmesg before I made that change. I'll try changing it back to "disabled".
So you are saying that after just performing those two steps above, the NVMe SSD should be recognized on my PCIe port?
> OK, so tegra234-soc-pcie.dtsi doesn't need to be updated? I wasn't getting anything for tegra194-pcie 14180000 in dmesg before I made that change. I'll try changing it back to "disabled".
It is just how device tree syntax works: there are a hundred places other than tegra234-soc-pcie.dtsi where 14180000 could be enabled; there is no rule that it must be enabled or disabled inside tegra234-soc-pcie.dtsi.
14180000 is also enabled in the screenshot above.
If you have to use tegra234-soc-pcie.dtsi to get 14180000 enabled, that is probably a hint that the change you added to tegra234-p3737-pcie.dtsi is not taking effect at all.
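One quick way to rule out a stale DTB is to check which device tree the board actually booted with; the properties and paths below are the usual L4T ones and may differ on a customized rootfs:
# Shows the dts path recorded at DTB build time; if it is not the tree that
# was edited, the tegra234-p3737-pcie.dtsi change cannot be in effect.
cat /proc/device-tree/nvidia,dtsfilename; echo
# If extlinux.conf has an FDT entry, the kernel uses that file instead of the
# DTB from the flashed partition.
grep -i fdt /boot/extlinux/extlinux.conf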
> So you are saying that after just performing those two steps above, the NVMe SSD should be recognized on my PCIe port?
Ideally yes, if all the patches really take effect.
The SSDs should be compatible with both (the Xavier and the Orin), correct?
Not sure; there is no guarantee. Please always treat them as separate cases.