@WayneWWW yes changing the boot order worked thank you.
Great, so with JetPack 4.6, would I run the JetsonHacks ./copy-rootfs-ssd.sh script to copy the eMMC to the SSD, and then change the boot order so that the SSD has priority? Basically, there is no longer any need for the ./setup-service.sh workaround described here?
Hi everyone,
I followed the steps and ran the scripts from https://github.com/jetsonhacks/rootOnNVMe to switch the rootfs to the SSD successfully.
But after a few days, the rootfs still loads from the SSD but gets stuck at the NVIDIA logo; I can only access the system through a terminal using Ctrl+Alt+F4, and the GUI is not responding. Disabling the service, removing the file setssdroot.conf from /etc, and rebooting boots from the SD card and shows the SSD unmounted. When I try to re-enable the service, it doesn't work. Can anyone help me resolve this issue?

I faced this a couple of days back and had to format the SSD, copy the rootfs back, and run the scripts again to get things working, which is time-consuming because everything has to be reinstalled on the SSD. It has now happened again, and I'm wondering if there is any other way to get back to the SSD without formatting.
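One thing worth trying before reformatting is repairing the SSD's filesystem while booted from the SD card. This is a hedged sketch, not the rootOnNVMe project's documented procedure: it assumes the SSD's rootfs partition shows up as /dev/sda1 (adjust to your device, e.g. /dev/nvme0n1p1) and that the service/config names match the rootOnNVMe repo.

```shell
# Boot from the SD card (service disabled, setssdroot.conf removed from /etc),
# then check and repair the SSD's ext4 filesystem while it is unmounted.
sudo umount /dev/sda1 2>/dev/null     # make sure the partition is not mounted
sudo fsck.ext4 -f -y /dev/sda1        # force a full check and auto-fix errors

# If the filesystem comes back clean, restore the config file and
# re-enable the service before rebooting. The source path below is an
# assumption; copy setssdroot.conf from wherever your checkout keeps it.
sudo cp setssdroot.conf /etc/
sudo systemctl enable setssdroot.service
sudo reboot
```

If fsck reports unrecoverable errors, the disk itself may be failing, in which case reformatting (or replacing the drive) is probably unavoidable.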
I’ve used a Samsung 870 Evo 1TB SATA 2.5 SSD and the systemd method, so basically using the steps and scripts from https://github.com/jetsonhacks/rootOnNVMe to switch the rootfs to the SSD.
I’ve tested this on a Jetson Xavier NX running Jetpack 4.5 (L4T 32.5.0).
Maybe there’s some disk corruption going on somewhere.
Maybe you want to try out my project, which sets up booting from SSD in one command?
If you find that you had disk corruption and your application doesn't need to write to disk, you can run it in read-only mode by running sudo ./sbts-bin/make_readonly.sh and rebooting. The system then runs with a memory overlay over the rootfs, so the disks are not mounted read-write and thus don't get corrupted.
As mentioned in a separate thread, I tried modifying cbo.dts, but that left the board unable to boot (using JetPack 4.6.1). Has anybody else tried modifying this file?
Can anyone kindly help? On my carrier board I am using a Jetson AGX Xavier 8GB module. An M.2 NVMe SSD is not being detected; I am using the C5 PCIe controller for the M.2 interface.
Since JetPack 5.0.2 it's possible to flash directly to the NVMe. It's a bit tricky because the documentation is hard to find; just refer to the Linux_for_Tegra/tools/kernel_flash/README_initrd_flash.txt manual.
Yep, I know, the names kernel_flash and README_initrd_flash may seem irrelevant to NVMe flashing, but believe me, this is how it works.
May I know when I can run the above command? Is it after the image file is created, and do I run it instead of using SDK Manager to flash the image to the SSD? Do I need to create any partitions on the NVMe drive, or will the flashing do that for me automatically?
You also need to change num_sectors in flash_l4t_nvme.xml according to your drive configuration. This is used to calculate the NVMe's size: there are two parameters in the XML, num_sectors and sector_size, and multiplying these two values gives the NVMe's total size in bytes. If the result does not match your drive, the drive space will be utilized incorrectly (partitions will be too small or too large).
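As a concrete check of that arithmetic: a nominal 1 TB drive (the 870 Evo mentioned above commonly reports 1,000,204,886,016 bytes) with the usual sector_size of 512 works out as below. The device path in the comment is an example.

```shell
# Derive num_sectors from the drive's size in bytes.
# On the device itself, 'sudo blockdev --getsize64 /dev/nvme0n1'
# prints the total size in bytes.
DRIVE_BYTES=1000204886016   # example: a nominal 1 TB drive
SECTOR_SIZE=512             # must match sector_size in flash_l4t_nvme.xml
echo $((DRIVE_BYTES / SECTOR_SIZE))   # → 1953525168, the num_sectors value
```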
The flash command uses this XML file to partition your drive, so you don't need to create anything manually. Just change num_sectors.
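For reference, the command shape documented in README_initrd_flash.txt looks roughly like this. The board config name below (jetson-agx-xavier-devkit) is an example; substitute the one for your board, and double-check the flags against the README for your JetPack version.

```shell
# Run from Linux_for_Tegra/ on the host PC, with the Jetson in recovery mode.
# Flashes the rootfs to the external NVMe using the (edited) XML layout.
sudo ./tools/kernel_flash/l4t_initrd_flash.sh \
    --external-device nvme0n1p1 \
    -c tools/kernel_flash/flash_l4t_nvme.xml \
    --external-only \
    jetson-agx-xavier-devkit nvme0n1p1
```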
Since JetPack 5.0.2, UEFI is used by default, and I'm not 100% sure that l4t_initrd_flash.sh will flash the UEFI firmware correctly. I assume that it should, but again, I'm not sure.
Try flashing as-is using only l4t_initrd_flash.sh. If the device won't boot, flash any image using flash.sh (or SDK Manager) first, then retry l4t_initrd_flash.sh.
But then, when I try to run YOLOv5 code or trtexec from TensorRT, I get the **NvRmMemInit failed** error when reading CUDA device information.