Is it possible to set up 5.0.2 to boot from NVMe by only flashing the eMMC first?

I understand this, but the readme document is too vague. It does explain that I am supposed to provide the name of the host-attached device:

<extdev_on_host> is the external device /dev node name as it appears on the host. For examples,
if you plug in a USB on your PC, and it appears as /dev/sdb, then <exdev_on_host> will be sdb

This implies that it’s going to do something that touches <exdev_on_host>, which it clearly does not, because it failed with the output I have already posted above. It’s possible that I’m reading the tea leaves incorrectly here, but notice that the rest of the output from this point makes no mention of even attempting to write to sdn:

*** no-flash flag enabled. Exiting now... ***
Save initrd flashing command parameters to /home/slu/iai_data/2022-12-15/Linux_for_Tegra/tools/kernel_flash/initrdflashparam.txt
/tmp/tmp.6eLqOFV2dr /home/slu/iai_data/2022-12-15/Linux_for_Tegra
writing boot image config in bootimg.cfg
extracting kernel in zImage
extracting ramdisk in initrd.img
/tmp/tmp.6eLqOFV2dr/initrd /tmp/tmp.6eLqOFV2dr /home/slu/iai_data/2022-12-15/Linux_for_Tegra
53066 blocks
Cleaning up...

Perhaps the command that actually writes to the specified <exdev_on_host> never made it into the script. Such a command would carry the critical information: which image (which may or may not have been prepared) should be used, and what process should be used to write it to the host-attached device.
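To be concrete about what I mean, I would expect that missing step to boil down to something of this shape. This is purely illustrative and not taken from the scripts; the image placeholder and /dev/sdn are my guesses about my own setup:

# hypothetical final write step, not something I found anywhere in the scripts
$ sudo dd if=<prepared_external_image> of=/dev/sdn bs=1M status=progress conv=fsync
$ sync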

I hope you can agree with me that the only reasonable interpretation of the output I’ve shared (let me know if more of the output from higher up in the log would be relevant) is that the script terminated prematurely because the --no-flash flag was passed to flash.sh. It would not be reasonable for me to start hacking at these scripts myself. There needs to be a better release process for these packages, as well as more complete documentation; the readmes are appreciated, but they are not in-depth enough.

There are more steps I have planned to test this. At the time I was testing with a USB 3.0 external adapter for the NVMe drive, but I suspect that may not behave the same as having the NVMe device attached directly to the computer as a /dev/nvmeXn1 device.

Summary of where we stand now:

  • As confirmed by @seeky15, there is no way to use the flashing scripts from the 5.0.2 BSP package download to put a Xavier NX eMMC SoM into a state where it auto-boots to NVMe (to be clear: no serial-port shenanigans, no manual step to configure UEFI interactively, no rebuilding and reinstalling UEFI, and no other workarounds unsuitable for factory production). However, I have confirmed that flashing 5.0.2 (rev. 1) with SDK Manager 1.9.0_10816_amd64.deb from an Ubuntu 20.04.5 LTS amd64 Linux machine using default settings does put it into such a state. Therefore, all we need is a workable massflash workflow that delivers this flashed state onto our SoMs.

  • Also, the above indicates that there may be some difference between what SDK Manager refers to as 5.0.2 (rev. 1) and the 5.0.2 BSP download. Can you check internally whether this is the case, and where we can get the updated 5.0.2 (rev. 1) BSP package download?

  • On the NVMe side, the only way I’ve been able to prepare an NVMe drive so that a 5.0.2 Xavier NX eMMC SoM can boot from it is by flashing 5.0.2 (rev. 1) with SDK Manager 1.9.0_10816_amd64.deb from an Ubuntu 20.04.5 LTS amd64 Linux machine using default settings, with one exception: the NVMe SSD is also installed in the flashing board and NVMe is chosen as the target during the flash step. This leads to a failed flash result, but the SSD is left in a good, bootable state. I am currently exploring ways to clone this NVMe device so that we can image these disks for internal development and then for production. If there is some massflash-related way to prep these NVMe disks as an alternative to cloning from a known-good “flashed” NVMe, that would also be appreciated, but so far it looks like imaging and cloning the disk will be the way to go (rough sketch below).
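The cloning approach I have in mind is roughly the following; the device names are examples from my host and I have not validated this end to end yet:

# capture an image of the known-good NVMe while it is attached to the host
$ sudo dd if=/dev/nvme0n1 of=xnx-5.0.2-golden.img bs=4M status=progress conv=fsync
# later, write that image onto a blank NVMe destined for another unit
$ sudo dd if=xnx-5.0.2-golden.img of=/dev/nvme1n1 bs=4M status=progress conv=fsync
$ sync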

May I ask why this is not suitable for factory production?
You could pre-build a modified UEFI image with the default boot order set to NVMe. (Initrd flash boot order - #26 by WayneWWW)
Just replace the UEFI image at Linux_for_Tegra/bootloader/uefi_jetson.bin.
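A minimal sketch of that replacement step, assuming the rebuilt image ends up at ~/edk2_build/uefi_jetson.bin (a hypothetical path):

$ cd Linux_for_Tegra
# keep a backup of the stock UEFI image
$ cp bootloader/uefi_jetson.bin bootloader/uefi_jetson.bin.orig
# drop in the rebuilt UEFI with the NVMe-first boot order
$ cp ~/edk2_build/uefi_jetson.bin bootloader/uefi_jetson.bin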

They should be the same. You could use the Jetpack SDK downloaded by SDKM; the path is indicated in SDKM.

Have you tried Workflow 7: Initrd Massflash for the massflash use case? (A rough two-step sketch follows below.)
Also, what is your result for Workflow 10? Does the following issue still exist?

l4t_initrd_flash_internal.sh: line 735: Linux_for_Tegra/tools/kernel_flash/initrdflashimgmap.txt: No such file or directory
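For reference, Workflow 7 in tools/kernel_flash/README_initrd_flash.txt is roughly a two-step process along these lines; the count of 5, the board name, and the root device are example values recalled from the README and should be double-checked against the copy shipped in the BSP:

# step 1: generate the massflash package without touching any device
$ sudo ./tools/kernel_flash/l4t_initrd_flash.sh --no-flash --massflash 5 jetson-xavier-nx-devkit-emmc mmcblk0p1
# step 2: from the generated mfi_<board-name> package on the flashing host, flash the connected units
$ sudo ./tools/kernel_flash/l4t_initrd_flash.sh --flash-only --massflash 5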

@unphased

Sorry, this thread has become quite long. Could you give a brief list of your current questions?

It did not seem suitable because it was not clear what it entails.

From that linked topic:

It looks like dtc can be used to follow the first sentence.
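If I understand correctly, that would look roughly like this; the overlay file name is just a placeholder for whichever dtbo the linked post means:

# decompile the overlay into editable source
$ dtc -I dtb -O dts -o boot-order-overlay.dts boot-order-overlay.dtbo
# edit boot-order-overlay.dts as described in the linked post, then recompile it
$ dtc -I dts -O dtb -o boot-order-overlay.dtbo boot-order-overlay.dts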
The second sentence

add dtbo name to your board config file

is highly unclear. Looking at the config files, I can see that, for example, jetson-xavier-maxn.conf has an env var:

OVERLAY_DTB_FILE="${OVERLAY_DTB_FILE},tegra194-p2888-0005-overlay.dtbo";

The Xavier NX config files do not have any OVERLAY_DTB_FILE env var; they only have a DTB_FILE=tegra194-p3668-0001-p3509-0000.dtb env var.
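If I am reading the instruction right, it amounts to appending a line like the following to the Xavier NX .conf being used for the flash; the overlay name here is a placeholder, since I do not know which dtbo is the right one for the boot-order change:

OVERLAY_DTB_FILE="${OVERLAY_DTB_FILE},boot-order-overlay.dtbo";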

The third sentence

Reflash the board

brings us to the present issue: flashing from the BSP package I’ve prepped has not been able to succeed (see the discussion above). I would want to get that working in some form before attempting the UEFI rebuild and the dtb changes for it.

OK, after some searching I now have a rough idea of how to go down this UEFI path, so I can accept it as likely viable for production rollout; I take back my earlier objection.

But what makes no sense at all, and still calls everything into question, is why my “failed” SDKM-derived 5.0.2 NVMe disk, combined with my vanilla 5.0.2 (rev. 1) SDKM-flashed SoM, seamlessly boots to NVMe all by itself without manual intervention. Ever since I confirmed this behavior, it has cast doubt on everything else. Why should I bother to rebuild UEFI just to change the boot order if 5.0.2 (rev. 1) via SDKM 1.9.0 provides the required functionality by itself?


I think my next step will be to use the SDK/BSP directory that was actually prepared by SDKM to run Workflow 7 experiments, to try to find a quick path toward massflash. I am willing to accept that, although there evidently exists a way to flash the SoM so that it auto-boots NVMe, I might never find out why it works, given discussions such as the aforementioned topic, Initrd flash boot order, in which it is clearly stated that 5.0.2 does not come with a way to auto-boot to NVMe without deep modifications like the UEFI one above.

I tried to do this already, but I will try it again after some more critical experiments that should shed light on the real issues. In particular, I need to test more of the initrd flash script examples from the readme, using the SDK directory provided by SDK Manager. It is possible that my earlier tests were conducted with a somehow improperly set up BSP package; I found it unusual that nothing I attempted with it could succeed.

Please allow me a few days for testing more thoroughly so that this discussion can be more productive. Thanks.


Have you tried this?
This command would not flash the NVMe itself (so you need to use l4t_initrd_flash.sh first to flash the NVMe), but it might help you mount the NVMe as rootfs. (Flashing Support — To set up an NVMe drive manually for use as the root file system)
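Roughly, that first l4t_initrd_flash.sh step is the external-device form, something along these lines; the partition config file name, the board name, and the final root-device argument are examples and should be checked against the relevant workflow in tools/kernel_flash/README_initrd_flash.txt for your release:

$ sudo ./tools/kernel_flash/l4t_initrd_flash.sh \
    --external-device nvme0n1p1 \
    -c tools/kernel_flash/flash_l4t_external.xml \
    --external-only \
    jetson-xavier-nx-devkit-emmc external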

You could also provide the steps you performed and the flash logs from SDKM for further checking.

Capture SDKM log: upper right corner “…” → Export Debug logs


Do not forget to disable the udisks2 service before flashing to NVMe:

$ systemctl stop udisks2.service
