l4t_initrd_flash.sh usage and arguments explanation

My first question is about the script itself. Many sources say to connect the NVMe to the Jetson and then run the script for flashing; however, in the official manual I see that the NVMe should be connected to the host machine. Could anyone confirm which is correct for JetPack 6.2?

https://docs.nvidia.com/jetson/archives/r35.4.1/DeveloperGuide/text/SD/FlashingSupport.html#TosetupaNVMedrivemanuallyforbooting

My second question is that I can’t comprehend the last argument passed to the script, which can be internal, external, or nvme/sda. The README defines combinations of this argument and the external-device argument that result in different combinations of full/minimal filesystems on the external and internal storage. First thing I’m missing: I have already prepared the rootfs, so how does the script create a full and a minimal filesystem from it? On top of that, I don’t understand the difference between passing --external-device nvme with nvme as the last argument versus --external-device nvme with external as the last argument. It would be great if someone could elaborate on that.

And the last thing: what would be the difference between flash.sh and a direct call to initrd flash? For previous versions I believe flash.sh didn’t support NVMe flashing, but now I am able to flash with sdkmanager, which raises the question of which script one should use.

Thank you!

Hi,

Quick answers to your questions:

  1. The most common way is to put the NVMe in the Jetson and then flash.
    You could also put the NVMe in the host PC and flash data to it there. However, that method only updates the rootfs on the NVMe; it won’t flash the bootloader in the QSPI located on the Jetson. It is similar to preparing a boot medium and then inserting that medium into the Jetson.

  2. Please just use “internal” as always. This parameter does not really play much of a role when you are using an external device.

First thing I’m missing: I have already prepared the rootfs, so how does the script create a full and a minimal filesystem from it?

I am not sure what this question is doing here; it doesn’t sound related to any of the previous questions. initrd flash and flash.sh do not create a rootfs. Are you trying to boot from two different kinds of boot media?

  1. And the last thing: what would be the difference between flash.sh and a direct call to initrd flash? For previous versions I believe flash.sh didn’t support NVMe flashing, but now I am able to flash with sdkmanager, which raises the question of which script one should use.

It is still true that flash.sh cannot flash an external drive, which means NVMe/USB SSDs cannot be flashed by that method. Sdkmanager uses a wrapper that calls flash.sh when flashing eMMC, and when it comes to flashing NVMe/USB, it uses initrd flash.
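A rough sketch of those two paths (`<board-config>` is a placeholder, e.g. jetson-orin-nx-devkit as used later in this thread; this illustrates the split, not sdkmanager’s literal invocation):

```shell
# Internal storage (eMMC / SD): plain flash.sh
sudo ./flash.sh <board-config> internal

# External storage (NVMe / USB SSD): initrd flash
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
     -c tools/kernel_flash/flash_l4t_external.xml <board-config> internal
```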

I am not sure what this question is doing here; it doesn’t sound related to any of the previous questions. initrd flash and flash.sh do not create a rootfs. Are you trying to boot from two different kinds of boot media?

I don’t have access to the files at the moment to show you the exact content, but if you take a look at “README_initrd_flash.txt”, it contains a table for the specified arguments, and it mentions full/minimal filesystems as a result of the command.

You can ignore that too. If you just use internal, then you don’t need to care about that (the minimal/full filesystem distinction).
That one is not really in use either.

Case 2 and case 4 in the table are actually quite useless cases: they would just make your kernel and rootfs load from different places, and ideally they should not be used.

Could you please provide the specific commands to flash the Jetson Orin NX with an NVMe? We tried multiple ways and all of them have issues. One note: our rootfs is customized with multistrap, without using the sample rootfs, and we found that NVMe flashing via the running Jetson depends on the set of packages installed. Maybe a missing error message somewhere causes approach (1) to fail.

  1. One approach we followed is:
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
      -c tools/kernel_flash/flash_l4t_external.xml \
      -p "-c bootloader/generic/cfg/flash_t234_qspi.xml --no-systemimg" --network usb0 \
      jetson-orin-nx-devkit internal

And booting fails with:

mount /dev/nvme0n1p1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error.

bash-5.1# blkid
/dev/nvme0n1p9: PARTLABEL="recovery-dtb" PARTUUID="4532316b-0249-45cb-83bc-a96893bd4935"
/dev/nvme0n1p11: PARTLABEL="recovery_alt" PARTUUID="2d23e4d0-e03a-4c7e-b108-b86c1d3b5251"
/dev/nvme0n1p7: PARTLABEL="B_reserved_on_user" PARTUUID="7be93f38-b404-4119-bb02-f340088f9906"
/dev/nvme0n1p5: PARTLABEL="B_kernel" PARTUUID="534a00b9-7d56-4765-9708-191b518c7c69"
/dev/nvme0n1p3: PARTLABEL="A_kernel-dtb" PARTUUID="501059a1-47e5-4c41-a759-2168a29f9a5f"
/dev/nvme0n1p1: PARTLABEL="APP" PARTUUID="f6c65506-7c08-451a-96a8-efd96c4977ce"
/dev/nvme0n1p14: PARTLABEL="UDA" PARTUUID="74fc394d-6ea8-4ab9-b61c-444a8578e570"
/dev/nvme0n1p12: PARTLABEL="recovery-dtb_alt" PARTUUID="2cfd2a38-3724-4408-80fd-a276f12a4700"
/dev/nvme0n1p8: PARTLABEL="recovery" PARTUUID="706fa3e2-5d7a-489c-b466-e57014613961"
/dev/nvme0n1p10: UUID="20E1-CF19" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="esp" PARTUUID="3d0d0fad-9ceb-40e2-9386-5d244e691d0d"
/dev/nvme0n1p6: PARTLABEL="B_kernel-dtb" PARTUUID="1bbd2153-33b5-4340-a5b7-f54069461953"
/dev/nvme0n1p4: PARTLABEL="A_reserved_on_user" PARTUUID="242e564d-49fd-407e-b32e-5047e3c15025"
/dev/nvme0n1p2: PARTLABEL="A_kernel" PARTUUID="70178c28-0d63-4742-a7e0-3848c0c8136f"
/dev/nvme0n1p15: PARTLABEL="reserved" PARTUUID="5bd322ad-1e71-4b37-8e07-7f778fc6424c"
/dev/nvme0n1p13: PARTLABEL="esp_alt" PARTUUID="5b6db47a-0597-41bc-83b7-c369cdd52a77"
bash-5.1# ls /mnt
bash-5.1# mount /dev/nvme0
nvme0       nvme0n1p10  nvme0n1p13  nvme0n1p2   nvme0n1p5   nvme0n1p8
nvme0n1     nvme0n1p11  nvme0n1p14  nvme0n1p3   nvme0n1p6   nvme0n1p9
nvme0n1p1   nvme0n1p12  nvme0n1p15  nvme0n1p4   nvme0n1p7   
bash-5.1# mount /dev/nvme0n1p1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error.

The PC also sees a corrupted filesystem. However, the PARTUUID is correct with this approach.
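As an aside, whether a partition actually carries a valid ext4 filesystem can be checked without mounting it, which helps separate “no filesystem was written” from “filesystem written but damaged”. A minimal sketch on a plain image file standing in for /dev/nvme0n1p1 (the file name and size here are made up for the demo):

```shell
# e2fsprogs tools often live in /sbin, which may not be on a user's PATH
export PATH="$PATH:/sbin:/usr/sbin"

# Create a small image and put an ext4 filesystem on it (-F: regular file, no root needed)
truncate -s 64M /tmp/app-demo.img
mkfs.ext4 -q -F /tmp/app-demo.img

# 'file' reads the superblock magic; a healthy APP partition reports ext4 here
file -b /tmp/app-demo.img

# Read-only consistency check; exit status 0 means the filesystem is clean
fsck.ext4 -n /tmp/app-demo.img
```

Running the equivalent `file -s` / `fsck.ext4 -n` with sudo against /dev/nvme0n1p1 would show whether anything ext4-shaped landed on the APP partition at all.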

  2. The second approach follows the official doc on preparing the SSD on the local machine and then updating the bootloader:
flash.sh jetson-orin-nx-devkit-nvme nvme0n1p1

And this command flashes the bootloader to the NVMe, overwriting the proper rootfs we had flashed from the PC, instead of updating the bootloader in QSPI. But that is what the doc says.

  3. Use internal with flash.sh:
flash.sh jetson-orin-nx-devkit-nvme internal

This flashes the bootloader, but the UUID of the APP partition doesn’t match the UUID flashed by initrd_flash with the --direct option.

Hi,

None of the info you provided really matters. Just one quick question first.

Which JetPack release are you using here? Is it JetPack 6.2 or JetPack 5? I saw you reading a JetPack 5 document, but your comment also mentions JetPack 6.2. That conflicts.

flash.sh cannot flash NVMe; doing that will just give you an empty NVMe, so there is no need to try it. initrd flash is the only reliable way to flash an external drive.

Yes, that was my mistake. JetPack 6.2: Flashing Support — NVIDIA Jetson Linux Developer Guide documentation.

The section “Manually Setting Up an NVMe Drive for Booting” refers to initrd_flash for flashing from the host.

And the very next section “Setting Up an NVMe Drive Manually to Use as Root File System” says to call flash.sh.

Btw, there are no examples in that doc of how to call initrd_flash with the Jetson connected.

Please read this one; the first section already covers it.

https://docs.nvidia.com/jetson/archives/r36.4.3/DeveloperGuide/IN/QuickStart.html

This one is taken from the README for initrd_flash. However, it originally specifies external, and we tried both. Still no luck.

Just tried. Same error as with this command:

sudo ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 \
      -c tools/kernel_flash/flash_l4t_external.xml \
      -p "-c bootloader/generic/cfg/flash_t234_qspi.xml --no-systemimg" --network usb0 \
      jetson-orin-nx-devkit internal
mount /dev/nvme0n1p1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error.

And the partition doesn’t have a valid filesystem.

Please share the full log. Your issue must be something else.

Also, just to clarify: is this a flash failure or a boot failure? Or are you not sure what I mean here?

Pretty sure it’s a flash issue. After flashing, the APP partition has no valid filesystem.

Sharing the latest build log. We added the “file” package to the rootfs, and now the same command fails to flash. You can see the error:

“Failed to mount APP partition. mount must be superuser”

fails-with-mount.txt (339.4 KB)

Hi,

Just to clarify: I suspect even sdkmanager cannot flash the NVMe on the device on your side. Could you confirm that too?

This should have nothing to do with the commands you are using.

Also, the error log points more toward this:

[ 255]: l4t_flash_from_kernel: The device size indicated in the partition layout xml is smaller than the actual size. This utility will try to fix the GPT.
[ 255]: l4t_flash_from_kernel: Error flashing external device
Flash failure
Either the device cannot mount the NFS server on the host or a flash command has failed. Check your network setting (VPN, firewall,...) to make sure the device can mount NFS server. Debug log saved to /tmp/tmp.ErhRCyeV8E. You can access the target's terminal through "sshpass -p root ssh root@fc00:1:1:0::2"

Did you format the NVMe to ext4 before flashing?

We had managed to flash once with sdkmanager, but with the original NVIDIA rootfs. After that we switched to flashing our own image.

Not intentionally. There are some partitions left over from the previous flash attempt.
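A common cleanup step in that situation (a generic suggestion, not something from this thread) is to wipe all stale filesystem and partition-table signatures from the drive before reflashing, e.g. with wipefs from util-linux. A sketch on an image file standing in for the NVMe; against the real drive this would be `sudo wipefs --all /dev/nvme0n1` and destroys everything on it:

```shell
# wipefs/mkfs often live in /sbin, which may not be on a user's PATH
export PATH="$PATH:/sbin:/usr/sbin"

# Build an image carrying a recognizable ext4 signature
truncate -s 64M /tmp/nvme-demo.img
mkfs.ext4 -q -F /tmp/nvme-demo.img

# Erase every signature wipefs can find (it prints the offsets it clears)
wipefs --all /tmp/nvme-demo.img

# With the magic bytes gone, the image is no longer detected as ext4
file -b /tmp/nvme-demo.img
```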

Could you validate the sdkmanager flash on the same host PC again?

If sdkmanager can flash the NVMe directly from this host PC, then it is not related to the NFS issue mentioned here:

Either the device cannot mount the NFS server on the host or a flash command has failed. Check your network setting (VPN, firewall,…) to make sure the device can mount NFS server. Debug log saved to /tmp/tmp.ErhRCyeV8E. You can access the target’s terminal through “sshpass -p root ssh root@fc00:1:1:0::2”
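For the NFS hint in that message, some generic host-side checks (standard Ubuntu tools; this command set is a suggestion, adapt to your setup):

```shell
sudo exportfs -v          # initrd flash serves the rootfs to the target over NFS
ip addr                   # check the USB network interface the target connects through
sudo ufw status verbose   # make sure no firewall/VPN rule blocks NFS/RPC traffic
```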

We added three packages: file, nfs-common, and cifs-utils, and started to get that error.