TX2-NX module, on the NX's carrier board: PCIe 0 is used for an SSD, and PCIe 1 is used for SATA via a bridge chip; then the SSD is lost. Keying in lspci -v, the module can still enumerate the SSD's memory controller, but the kernel driver in use is none, not nvme. See the picture. Why? Where is the problem? What should we do? Thank you!
Are you using a custom board here? Which kind of SSD are you using?
We are using the Jetson Xavier NX DevKit carrier board, not a custom board.
The SSD is a Lenovo SL700; its parameters are M.2 interface, NVMe, 128 GB.
The TX2-NX module can enumerate the SSD's memory controller, but the kernel driver in use is none. The problem arises when PCIe 1 is used for SATA via a bridge chip, the JMB582.
If we use an NX module instead, everything is OK.
What do you mean by "the problem arises when PCIe 1 is used for SATA via a bridge chip"?
So if you remove the SATA bridge then the SSD can work?
Yes. If we remove the SATA bridge, then the SSD works.
Can you share the dmesg of the SSD working and non-working cases?
Also, will the SSD work if you connect other devices on the M.2 Key E port? For example, a Wi-Fi card.
The dmesg logs have been uploaded. Please check.
Also, we only use the NX DevKit's Wi-Fi card, and the SSD can work.
We have no other devices for the M.2 Key E port, so we can't test more.
Additionally, we can't understand why the module can enumerate the SSD's memory controller while the kernel driver in use is none.
dmesg_false.txt (64.8 KB)
dmesg_work.txt (61.6 KB)
There is an error from the kernel in the failing case.
[ 0.978257] nvme nvme0: Minimum device page size 134217728 too large for host (4096)
[ 0.978261] nvme nvme0: Removing after probe failure status: -19
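To illustrate where that 134217728 figure comes from, here is a small sketch (the helper name is mine, not from the kernel): per the NVMe spec, bits 51:48 of the controller's CAP register are MPSMIN, and the minimum device page size is 2^(12 + MPSMIN). If the PCIe link drops and the controller's registers read back as all-ones, MPSMIN decodes as 0xF, which gives exactly 2^27 = 134217728. That suggests the driver may be reading garbage from a dead link rather than a real device limit.

```python
# Decode the NVMe CAP.MPSMIN field that produces the
# "Minimum device page size 134217728" error.
# Field layout per the NVMe spec: CAP bits 51:48 = MPSMIN,
# minimum page size = 2 ** (12 + MPSMIN).

def min_page_size(cap: int) -> int:
    mpsmin = (cap >> 48) & 0xF
    return 1 << (12 + mpsmin)

# A healthy controller typically reports MPSMIN = 0 -> 4096 bytes.
print(min_page_size(0x0000000000000000))  # 4096

# If the link is down and the register reads back all-ones,
# MPSMIN = 0xF -> 2**27 = 134217728, matching the error message.
print(min_page_size(0xFFFFFFFFFFFFFFFF))  # 134217728
```

If that is what is happening, the root cause would be PCIe link instability rather than anything about the SSD itself.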
We had set up the SSD before this Q&A, according to NVIDIA's document "NVIDIA Jetson Linux Driver Package Software Features".
Keyed in: "sudo parted /dev/sdc mklabel gpt", "sudo parted /dev/sdc mkpart APP 0GB 128GB", "sudo mkfs.ext4 /dev/sdc1".
What should we do next?
I don't have an SSD to work on, but the typical next step would be to populate "/dev/sdc1" with a root filesystem. Not sure what the usual way is to set this up on a TX2 for an external device's filesystem (it would be a minor edit of the system.img, but with edits to …)
My goal is to put the file system on NVMe, and I have tested that it can successfully boot from NVMe. Method:
- Flash the system to eMMC first:
sudo ./flash.sh jetson-xavier-nx-devkit-tx2-nx mmcblk0p1
- Then use "sync" to copy the rootfs to NVMe.
- Finally, re-flash the system:
sudo ./flash.sh jetson-xavier-nx-devkit-tx2-nx nvme0n1p1
In this way, the rootfs is transplanted to NVMe and can be used normally. Later, I needed to add a 2 TB mechanical hard disk, so I used PCIe 1 to SATA.
However, such devices often fail to start normally. The PCIe-to-SATA chip I use is the JMB582.
I want to know whether SATA cannot be connected to PCIe 1 of the TX2 NX, because I have also tried this on the NX, and there is no such problem there. Or are there places in the kernel and device tree that need to be modified?
The PCIe of the JMB582 is Gen3 x1, the PCIe of the TX2 NX is Gen2 x1, and the PCIe of the NX is Gen3 x1.
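For reference, the raw per-lane throughput difference between those link generations can be sketched like this (numbers from the PCIe spec: Gen2 is 5 GT/s with 8b/10b encoding, Gen3 is 8 GT/s with 128b/130b encoding; the helper function is just an illustration):

```python
# Approximate usable per-lane PCIe bandwidth, comparing the Gen2 x1
# link on the TX2 NX with the Gen3 x1 link on the Xavier NX.

def lane_bandwidth_mb_s(gt_per_s: float, encoding: float) -> float:
    """Usable bandwidth of one lane in MB/s (1 MB = 1e6 bytes)."""
    return gt_per_s * 1e9 * encoding / 8 / 1e6

gen2 = lane_bandwidth_mb_s(5.0, 8 / 10)      # TX2 NX: Gen2 x1
gen3 = lane_bandwidth_mb_s(8.0, 128 / 130)   # Xavier NX: Gen3 x1
print(round(gen2), round(gen3))  # 500 985
```

Note that when a Gen3 device such as the JMB582 trains against a Gen2 root port, the link should simply negotiate down to Gen2; the speed mismatch by itself is normal and should not make the link fail.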
I can't be of much use in this, but if one of the required drivers changed, then perhaps it is just a case of the other driver needing more time. From what you said, it seems this works "sometimes". Is that correct? If it works "sometimes" with the mechanical disk, then it means that SATA over PCIe can work. Maybe the nvme0 error is due to the timing of various parts of software loading. I have no way to do anything but guess, but either it is signal quality or it is a timing issue (I don't know what the probe status -19 means in terms of what can cause it to show up).
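One part of that can be decoded: the -19 in "Removing after probe failure status: -19" is a negative Linux errno, and errno 19 is ENODEV, "No such device". In other words, the nvme driver gave up because the device effectively disappeared, which fits the dead-link theory. A quick check:

```python
# "Removing after probe failure status: -19": -19 is -ENODEV,
# the Linux errno for "No such device".
import errno
import os

print(errno.ENODEV)                # 19
print(os.strerror(errno.ENODEV))   # on Linux: "No such device"
```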
@ygq12332189 So, what is the solution here finally?
No. I have an idea: when the CPU starts, bring up PCIe 0 first, and after a delay of N ms, bring up PCIe 1. I want to modify this N; how do I modify it?
Also, we observed that the startup sequence of the two PCIe controllers is PCIe 0 first, then PCIe 1. How can we reverse it?
We have observed a phenomenon: if PCIe 0 also uses a PCIe-to-SATA bridge, then both PCIe ports work stably. If PCIe 0 connects to the SSD and PCIe 1 to SATA, then the SSD may be lost.
Can you help us modify the startup sequence of PCIe?
We expect to start PCIe 1 first and then PCIe 0.