Dear NVIDIA Team,
We have a custom carrier board with the AGX Orin module and an NVMe SSD installed in the M.2 Key M socket. The rootfs is on the internal eMMC; the NVMe is formatted with an ext4 filesystem.
After a power-off/on cycle, the NVMe is detected and works without any issues. However, if we then reboot via the “reboot” command in Ubuntu, the NVMe is no longer detected (not visible with lspci). The JetPack version in use is 5.1.2.
Here are the PCIe-port-specific logs for the working and non-working cases:
log_ok.txt (1.6 KB)
log_nok.txt (1.0 KB)
After further investigation on our side, we found that if we add the following line to the nv.sh script, the NVMe is always detected:
echo 421 > /sys/class/gpio/export
This exports PL.01, which carries the signal PCI4_RESET_N going to the M.2 Key M slot (similar to the DevKit). We would now like to know why exporting this GPIO to sysfs solves our issue. We did not change anything in the pinmux file for this pin.
Any ideas what the problem could be?
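For reference, a minimal sketch of the workaround as it could appear in nv.sh. The GPIO number 421 is taken from our setup (JetPack 5.1.2); the sysfs GPIO numbering can change between L4T releases, so it should be verified against /sys/kernel/debug/gpio. The GPIO_SYSFS override is only there to make the snippet safely testable outside the target:

```shell
#!/bin/sh
# Workaround sketch: export PL.01 (sysfs GPIO 421 on this JetPack 5.1.2
# setup, signal PCI4_RESET_N to the M.2 Key M slot) so the NVMe is
# detected after a warm reboot. Verify the number with:
#   grep PL.01 /sys/kernel/debug/gpio
GPIO_SYSFS="${GPIO_SYSFS:-/sys/class/gpio}"
GPIO_NUM=421

# Only export if the sysfs GPIO interface exists and the pin is not
# already exported; a second write to "export" would return EBUSY.
if [ -e "$GPIO_SYSFS/export" ] && [ ! -d "$GPIO_SYSFS/gpio$GPIO_NUM" ]; then
    echo "$GPIO_NUM" > "$GPIO_SYSFS/export" 2>/dev/null || true
fi
```

Note that exporting the pin only claims it through the legacy sysfs GPIO interface; it does not by itself change the pinmux, which is why it is surprising that this affects detection.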
Just a question: if you use the same PCIe device with the same module on an NV devkit, can you still reproduce the same error?
Thank you for the fast response.
The behavior is the same on the NV devkit. After a reboot, the NVMe is not detected with lspci. After adding the export command to nv.sh, the NVMe is detected after a reboot there as well.
Do you have another kind of NVMe on your side that you could use for the same test on the devkit?
Yes, we have other NVMe drives (different manufacturer and type), and with them we do not see any errors. Still, we would like to know why exporting the GPIO helps for this particular NVMe type.
Could you share the model name of the NVMe that hits this issue?
We need to check this locally.
It is the following NVMe from Apacer: B92.935LHU.00104
Do you have any update for us?