How to Boot from NVMe SSD?

Not sure about Xavier, but IIRC for the TX2 I had to build a kernel with CONFIG_PCI_TEGRA=y (and CONFIG_BLK_DEV_NVME=y) built in rather than as modules in order to boot a Linux kernel that mounts its rootfs from an NVMe SSD (it still used mmc0 for early boot and extlinux.conf for the kernel args and image, though).
Has anyone tried that?
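
For reference, a rough sketch of what that kind of setup looks like; the Kconfig symbols are the standard upstream ones named above, and the extlinux.conf stanza is only illustrative (device name and arguments assumed), not copied from an actual TX2:

# Kernel options built in (=y) instead of as modules, e.g. in the L4T kernel config:
#   CONFIG_PCI_TEGRA=y
#   CONFIG_BLK_DEV_NVME=y
# Illustrative /boot/extlinux/extlinux.conf entry on the eMMC pointing the rootfs at the SSD:
LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      APPEND ${cbootargs} root=/dev/nvme0n1p1 rw rootwait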

Dear all,

How do I boot an AGX Xavier that has just come out of the box?
Does it need an SD card to boot?
When I connect the AGX Xavier to an HDMI screen, a keyboard and a mouse and power it up, nothing appears on the screen, and the keyboard Caps Lock light does not toggle on/off.

I have followed https://developer.download.nvidia.com/assets/embedded/secure/jetson/xavier/docs/jetson_agx_xavier_developer_kit_user_guide.pdf

How to Install JetPack
Installing JetPack to your developer kit requires these steps, which are detailed in the
sections below:
 Download JetPack installer to your Linux host computer.
 Connect your developer kit to the Linux host computer.
 Put your developer kit into Force Recovery Mode.
 Run JetPack installer to select and install desired components.
Download Installer to the Host Computer
You must have a Linux host computer to run JetPack installer and flash the developer
kit. Supported host operating systems are:
 Ubuntu Linux x64 Version 18.04 or 16.04
Download the latest JetPack installer to the Linux host from the JetPack page on the
Jetson Developer Site.
Note: The installer can flash and update software on a target Jetson device, but it
cannot run directly on that device. Whether or not a Jetson device is
present, you can use JetPack installer to update software on the Linux host.
Connect Developer Kit to the Host Computer
 Use the included USB cable to connect the Linux host computer to the front USB
Type-C connector on the developer kit.
 Connect a display, keyboard, and mouse to your Jetson AGX Xavier Developer Kit
(see Physical Configuration Instructions, above).
 Connect the developer kit and Linux host computer to the same network.
 Connect the included AC adapter to the developer kit’s DC jack.
Put Developer Kit into Force Recovery Mode
The developer kit must be in Force USB Recovery Mode (RCM) so the installer can
transfer system software to the Jetson AGX Xavier module.
1. Connect the developer kit as described above. It should be powered off.
2. Press and hold down the Force Recovery button.
3. Press and hold down the Power button.
4. Release both buttons.
Run the Installer
JetPack installer includes a Component Manager, allowing you to choose what to install
on the Linux host computer and/or the Jetson Developer Kit.

I have run SDK Manager on my Ubuntu 18.04 host; at step 03 it gives:

Could not connect to the target. Verify that:
1. The device is connected to this host machine with a USB cable.
2. Ubuntu 'System configuration wizard' is completed on the device. (How do I make this one?)
3. Jetson's Ubuntu OS is up and running (I cannot boot the AGX Xavier).
When ready, click 'Flash' to continue.

Do I understand the above error message correctly, that the AGX Xavier must be booted and ready before it can receive the flash from SDK Manager?

Please help me. Thank you very much in advance.

Warmest regards,
Suryadi

No SD card needed for boot (and probably should be avoided while flashing).

When a Xavier is in recovery mode and correctly connected over USB to the host, the following will show something:

lsusb -d 0955:7019

(0955 is the NVIDIA USB registry manufacturer ID, and 7019 is the Xavier product ID)

Basically anything which brings up power or cycles power while the recovery button is held down will put the Xavier in recovery mode. At this point the Xavier is a custom USB device and the driver package running on the host PC will be able to work with the device. People using a VM usually have problems due to the way USB is passed through, so that would be the first thing to ask: Is this a VM? I am going to guess “no” since even a VM usually sees the device at the start.

Hello,
It may not be the answer you all want, but I'd like to share my solution to this problem.
Since it seems difficult to mount an SSD partition as the boot/root device on a Xavier,
I took the approach of mounting the original filesystem on eMMC read-only and overlaying a read-writable filesystem located on the NVMe SSD on top of it.
Although it slows read/write speed down a bit, the usable size of the system filesystem expands to the size of the SSD (for my Xavier, 1 TB).
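
For illustration only (this is not the actual script from the repo mentioned below), a minimal sketch of an overlay of this kind, assuming the writable layer lives on /dev/nvme0n1p1:

# eMMC rootfs stays as the read-only lower layer, the NVMe partition provides the writable upper layer
mkdir -p /mnt/nvme
mount /dev/nvme0n1p1 /mnt/nvme
mkdir -p /mnt/nvme/upper /mnt/nvme/work /mnt/newroot
mount -t overlay overlay \
    -o lowerdir=/,upperdir=/mnt/nvme/upper,workdir=/mnt/nvme/work \
    /mnt/newroot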

Please have a look at my GitHub repo; it includes all the script files and a one-liner to set up the system I proposed.

Hope it will help you!

P.S.

The script I posted above actually has one problem on shutdown: unmounting the overlaid partition fails (though everything works without any problem for now).
If anyone has advice or an answer, please create a pull request to my repo or just post to this thread.
It'd definitely help everyone! ;)

Hi,
I have modified furushchev's solution in a "true" systemd way and am now able to mount the SSD drive as rootfs without overlays or reboot problems.

  1. Xavier was flashed and software was installed using SDKManager.
  2. Rootfs from eMMC was copied to SSD ( with one ext4 partition in my case):
    #!/bin/bash
    sudo mount /dev/nvme0n1p1 /mnt
    sudo rsync -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt
    
  3. Then we need a systemd unit that runs just after the initrd image is unmounted and the real rootfs is mounted, before any other targets run from the real rootfs. Something like the following setssdroot.service:
    [Unit]
    Description=Change rootfs to SSD in M.2 key M slot (nvme0n1p1)
    DefaultDependencies=no
    Conflicts=shutdown.target
    After=systemd-remount-fs.service
    Before=local-fs-pre.target local-fs.target shutdown.target
    Wants=local-fs-pre.target
    ConditionPathExists=/dev/nvme0n1p1
    ConditionVirtualization=!container
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/setssdroot.sh
    
    [Install]
    WantedBy=default.target
    

    I added a condition so the service starts only if an SSD is installed (/dev/nvme0n1p1 exists).

    setssdroot.sh:

    #!/bin/sh
    NVME_DRIVE="/dev/nvme0n1p1"
    CHROOT_PATH="/nvmeroot"
    
    INITBIN=/lib/systemd/systemd
    EXT4_OPT="-o defaults -o errors=remount-ro -o discard"
    
    modprobe ext4
    #modprobe fuse
    
    mkdir -p ${CHROOT_PATH}
    mount -t ext4 ${EXT4_OPT} ${NVME_DRIVE} ${CHROOT_PATH}
    
    cd ${CHROOT_PATH}
    /bin/systemctl --no-block switch-root ${CHROOT_PATH}
    
  4. The new service was installed with the following commands:
    #!/bin/sh
    sudo cp setssdroot.service /etc/systemd/system
    sudo cp setssdroot.sh /sbin
    sudo chmod 777 /sbin/setssdroot.sh
    sudo systemctl daemon-reload
    sudo systemctl enable setssdroot.service
    

As a result, after reboot systemd mounts /dev/nvme0n1p1 as "/". We are then running the system (except the kernel and initrd) from the SSD and have a "recovery" system on the eMMC.
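
A quick way to check which device actually ended up as "/" after the switch (the output below is what one would expect, not captured from a real board):

findmnt /
# TARGET  SOURCE          FSTYPE  OPTIONS
# /       /dev/nvme0n1p1  ext4    rw,relatime,discard,errors=remount-ro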


If the rootfs from the SSD is mounted, you will see an SD drive icon on the desktop (Ubuntu assigns an SD icon to the eMMC).
systemd-analyze result attached.

Now we need to find out how to load a new kernel from the SSD via kexec from the flashed kernel :).
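
If anyone wants to experiment with that, a rough kexec sketch (untested here; it assumes kexec-tools is installed and the running kernel was built with CONFIG_KEXEC) would look something like:

# load the kernel and initrd from the SSD rootfs, reusing the current cmdline, then jump into it
sudo kexec -l /boot/Image --initrd=/boot/initrd \
    --command-line="$(cat /proc/cmdline)"
sudo kexec -e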

It really works :)
Thanks a lot, @crazy_yorick


Can someone from Nvidia please confirm/provide the supported way of doing this?

Thank you.

After reflashing to JetPack 4.4 DP I had time to play with booting from SSD again, and here are some remarks. Because the forum forbids sharing archives with scripts, formatted text is included instead.

  1. I added a conditional start for the service based on the availability of /etc/setssdroot.conf (on mmcblk0p1). To mount the rootfs from eMMC instead, delete the file /etc/setssdroot.conf on mmcblk0p1.
    setssdroot.service:

[Unit]
Description=Change rootfs to SSD in M.2 key M slot (nvme0n1p1)
DefaultDependencies=no
Conflicts=shutdown.target
After=systemd-remount-fs.service
Before=local-fs-pre.target local-fs.target shutdown.target
Wants=local-fs-pre.target
ConditionPathExists=/dev/nvme0n1p1
ConditionPathExists=/etc/setssdroot.conf
ConditionVirtualization=!container
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/setssdroot.sh
[Install]
WantedBy=default.target

  2. setssdroot.service and setssdroot.sh should be present on both mmcblk0p1 and nvme0n1p1; otherwise the service reports an error to the journal, though it still switches the rootfs.

  3. In JetPack 4.4 it is proposed to install new kernels and modules with a deb package. Keep in mind that after installation the kernel, initrd and modules for the rootfs switched to the SSD end up in /boot/* and /lib/modules/[kernel name]/ on the SSD, and must be copied manually to the same folders on mmcblk0p1 (see the sketch below), because the Image and initrd that CBoot actually boots are the ones in the /boot folder on mmcblk0p1, as pointed to by /boot/extlinux/extlinux.conf on mmcblk0p1. If the module versions in the initrd on mmcblk0p1 and in /lib/modules/[kernel name]/ on the SSD differ, module loading will be tainted.
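
A sketch of that manual synchronization, assuming the system is booted from the SSD, the eMMC root partition is mounted at /mnt and the kernel release is 4.9.140-tegra (both the mount point and the version are just examples):

sudo mount /dev/mmcblk0p1 /mnt
sudo cp /boot/Image /boot/initrd /mnt/boot/
sudo rsync -a /lib/modules/4.9.140-tegra/ /mnt/lib/modules/4.9.140-tegra/
sudo umount /mnt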

So, the step-by-step guide is the following:

  • Flash Xavier with rootfs on eMMC (default with SDKManager)

  • Install the default software that will be needed on both the main (SSD) and restoration (eMMC) systems.

  • Copy rootfs to SSD.

#!/bin/bash
sudo mount /dev/nvme0n1p1 /mnt
sudo rsync -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt

  • Install service (steps 3-5 from my previous post, setssdroot.service from top of this post).

  • Copy service files to SSD

sudo cp /etc/systemd/system/setssdroot.service /mnt/etc/systemd/system/setssdroot.service
sudo cp /sbin/setssdroot.sh /mnt/sbin/setssdroot.sh

  • Install software for the restoration rootfs (on eMMC). It is possible to reboot back into this rootfs if the /etc/setssdroot.conf file is absent.

  • Create file /etc/setssdroot.conf

sudo touch /etc/setssdroot.conf

  • Reboot the system. You will boot into the new rootfs located on the SSD.
    Don’t forget to synchronize the /boot/ and /lib/modules/ folders with the eMMC when modifying kernels.

When the rootfs on the SSD is used, /dev/mmcblk0 is unmounted, so we can back it up with dd.
The rootfs on the SSD can be safely backed up after booting back into the rootfs on eMMC:

sudo mount /dev/mmcblk0p1 /mnt
sudo rm /mnt/etc/setssdroot.conf
reboot

For backups I use an SSD drive connected to the M.2 key E slot through an M.2 key E to M.2 key M adapter from AliExpress.
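
As an illustration of the dd backup mentioned above, assuming the rootfs on the SSD is active and a destination directory /media/backup exists (the path is just an example):

sudo dd if=/dev/mmcblk0 of=/media/backup/emmc-backup.img bs=8M status=progress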


There is another method to mount the rootfs from SSD in JetPack 4.4. Now the initrd is located alongside the kernel image in the /boot/ folder on /dev/mmcblk0p1. The initrd that gets loaded is set by the INITRD line in /boot/extlinux/extlinux.conf on /dev/mmcblk0p1.

LABEL primary
         MENU LABEL primary kernel
         LINUX /boot/Image
         INITRD /boot/initrd

It’s possible to unpack initrd:

#!/bin/bash
sudo mkdir newinitrd
cd newinitrd
sudo cp /boot/initrd ../initrd
gzip -cd ../initrd | cpio -imd

Now we can edit the init script to change the rootdev variable.
My test variant of init:

#!/bin/bash
initrd_dir=/mnt/initrd;
dhclient_flag="true";
count=0;

echo "Starting L4T initial RAM disk" > /dev/kmsg;

# Mount procfs, devfs, sysfs and debugfs
mount -t proc proc /proc
if [ $? -ne 0 ]; then
    echo "ERROR: mounting proc fail..." > /dev/kmsg;
    exec /bin/bash;
fi;
mount -t devtmpfs none /dev
if [ $? -ne 0 ]; then
    echo "ERROR: mounting dev fail..." > /dev/kmsg;
    exec /bin/bash;
fi;
mount -t sysfs sysfs /sys
if [ $? -ne 0 ]; then
    echo "ERROR: mounting sys fail..." > /dev/kmsg;
    exec /bin/bash;
fi;
mount -t debugfs none /sys/kernel/debug/
if [ $? -ne 0 ]; then
    echo "ERROR: mounting debugfs fail..." > /dev/kmsg;
    exec /bin/bash;
fi;

# create reboot command based on sysrq-trigger
if [ -e "/proc/sysrq-trigger" ]; then
    echo -e "#!/bin/bash \necho b > /proc/sysrq-trigger;" > /sbin/reboot;
    chmod 755 /sbin/reboot;
fi;

# ssdroot parameter at extlinux.conf
rootdev="$(sed -ne 's/.*\bssdroot=\/dev\/\([abcdefklmnps0-9]*\)\b.*/\1/p' < /proc/cmdline)"
if [ "${rootdev}" != "" ]; then
    echo "Root from ssdroot parameter found: ${rootdev}" > /dev/kmsg;
fi
if [ "${rootdev}" == "" ]; then
    # real root parameter
    rootdev="$(sed -ne 's/.*\broot=\/dev\/\([abcdefklmnps0-9]*\)\b.*/\1/p' < /proc/cmdline)"
fi
if [ "${rootdev}" == "" ]; then
    uuid_regex='[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
    rootdev="$(sed -ne "s/.*\broot=\(PARTUUID=${uuid_regex}\)\b.*/\1/p" < /proc/cmdline)"
fi

if [ "${rootdev}" != "" ]; then
    echo "Root device found: ${rootdev}" > /dev/kmsg;
fi

# added branch for nvme
if [[ "${rootdev}" == nvme* ]]; then
    if [ ! -e "/dev/${rootdev}" ]; then
        count=0;
        while [ ${count} -lt 50 ]; do
            sleep 0.2;
            count=$(expr ${count} + 1);
            if [ -e "/dev/${rootdev}" ]; then
                break;
            fi
        done
    fi
    if [ -e "/dev/${rootdev}" ]; then
        echo "Found dev node: /dev/${rootdev}" > /dev/kmsg;
    else
        echo "ERROR: ${rootdev} not found" > /dev/kmsg;
        exec /bin/bash;
    fi
    mount /dev/${rootdev} /mnt/;
    if [ $? -ne 0 ]; then
        echo "ERROR: ${rootdev} mount fail..." > /dev/kmsg;
        exec /bin/bash;
    fi;
elif [[ "${rootdev}" == PARTUUID* ]]; then
    count=0;
    while [ ${count} -lt 50 ]; do
        sleep 0.2;
        count="$(expr ${count} + 1)"
        mount "${rootdev}" /mnt/;
        if [ $? -eq 0 ]; then
            break;
        fi
    done
    mountpoint /mnt/;
    if [ $? -ne 0 ]; then
        echo "ERROR: ${rootdev} mount fail..." > /dev/kmsg;
        exec /bin/bash;
    fi;
elif [[ "${rootdev}" == mmcblk* ]]; then
    if [ ! -e "/dev/${rootdev}" ]; then
        count=0;
        while [ ${count} -lt 50 ]; do
            sleep 0.2;
            count=$(expr ${count} + 1);
            if [ -e "/dev/${rootdev}" ]; then
                break;
            fi
        done
    fi
    if [ -e "/dev/${rootdev}" ]; then
        echo "Found dev node: /dev/${rootdev}" > /dev/kmsg;
    else
        echo "ERROR: ${rootdev} not found" > /dev/kmsg;
        exec /bin/bash;
    fi
    mount /dev/${rootdev} /mnt/;
    if [ $? -ne 0 ]; then
        echo "ERROR: ${rootdev} mount fail..." > /dev/kmsg;
        exec /bin/bash;
    fi;
elif [[ "${rootdev}" == sd* ]]; then
    if [ ! -e "/dev/${rootdev}" ]; then
        while [ ${count} -lt 50 ]; do
            sleep 0.2;
            count=$(expr ${count} + 1);
            if [ -e "/dev/${rootdev}" ]; then
                break;
            fi
        done
    fi
    if [ -e "/dev/${rootdev}" ]; then
        echo "Found dev node: /dev/${rootdev}" > /dev/kmsg;
    else
        echo "ERROR: ${rootdev} not found" > /dev/kmsg;
        exec /bin/bash;
    fi
    mount /dev/${rootdev} /mnt/;
    if [ $? -ne 0 ]; then
        echo "ERROR: ${rootdev} mount fail..." > /dev/kmsg;
        exec /bin/bash;
    fi;
elif [[ "${rootdev}" == "nfs" ]]; then
    eth=$(cat /proc/cmdline | sed 's/.* ip=\([a-z0-9.:]*\) .*/\1/' | awk -F ":" '{print $6}');
    echo "Ethernet interface: $eth" > /dev/kmsg;
    ipaddr=$(ifconfig "$eth" | grep -A1 "$eth" | grep "inet addr" | sed 's/.*addr:\([0-9\.]*\) .*/\1/');
    if [[ "$ipaddr" =~ [0-9].[0-9].[0-9].[0-9] ]]; then
        echo "IP Address: $ipaddr" > /dev/kmsg;
        dhclient_flag="false";
    else
        while [ ${count} -lt 50 ]; do
            sleep 0.2;
            ipaddr=$(ifconfig "$eth" | grep -A1 "$eth" | grep "inet addr" | sed 's/.*addr:\([0-9\.]*\) .*/\1/');
            if [[ "$ipaddr" =~ [0-9].[0-9].[0-9].[0-9] ]]; then
                echo "IP Address: $ipaddr" > /dev/kmsg;
                dhclient_flag="false";
                break;
            fi
            count=$(expr ${count} + 1);
        done
    fi
    if [ "$dhclient_flag" == "true" ]; then
        timeout 8s /sbin/dhclient $eth;
        if [ $? -ne 0 ]; then
            echo "ERROR: dhclient fail..." > /dev/kmsg;
            exec /bin/bash;
        fi;
    fi;
    nfsroot_path="$(cat /proc/cmdline | sed -e 's/.*nfsroot=\([^ ,]*\)[ ,].*/\1 /')";
    nfsroot_opts="$(cat /proc/cmdline | sed -ne 's/.*nfsroot=\([^ ,]*\),\([^ ]*\).*/\2 /p')";
    if [[ "${nfsroot_opts}" == "" ]]; then
        nfsroot_opts="nolock"
    fi
    mount -t nfs -o ${nfsroot_opts} ${nfsroot_path} /mnt/ &>/dev/kmsg;
    if [ $? -ne 0 ]; then
        echo "ERROR: NFS mount fail..." > /dev/kmsg;
        exec /bin/bash;
    fi;
else
    echo "No root-device: Mount failed" > /dev/kmsg;
    exec /bin/bash;
fi

echo "Rootfs mounted over ${rootdev}" > /dev/kmsg;
mount -o bind /proc /mnt/proc;
mount -o bind /sys /mnt/sys;
mount -o bind /dev/ /mnt/dev;
cd /mnt;
cp /etc/resolv.conf etc/resolv.conf

echo "Switching from initrd to actual rootfs" > /dev/kmsg;
mount --move . /
exec chroot . /sbin/init 2;

And repack initrd:

#!/bin/bash
find . -print0 | cpio --null --quiet -H newc -o | gzip -9 -n > ../initrdtossd
sudo cp ../initrdtossd /boot/initrdtossd

Now we need to edit extlinux.conf: change the primary boot option and create a secondary one!!! (uncomment it at the end of the file).

LABEL primary
         MENU LABEL primary kernel
         LINUX /boot/Image
         INITRD /boot/initrdtossd
         APPEND ${cbootargs} ssdroot=/dev/nvme0n1p1

!!!
This method is only a proposal so far; I couldn't make it work yet.
The rootdev variable is empty at the line from init quoted below, while /proc/cmdline contains the string ssdroot=/dev/nvme0n1p1 and /dev/nvme0n1p1 is visible when the system boot stops in the initrd:

rootdev="$(sed -ne 's/.*\bssdroot=\/dev\/\([abcdefklmnps0-9]*\)\b.*/\1/p' < /proc/cmdline)"

Any advice?
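
One way to debug this is to run the sed expression against a sample cmdline string outside the initrd. Note that, as written, the bracket expression [abcdefklmnps0-9] does not contain the letter "v", so it cannot capture "nvme0n1p1"; that may be why rootdev comes back empty (the match succeeds with an empty capture group). A quick test (the sample cmdline string and the widened character class are my own illustration):

echo "root=/dev/mmcblk0p1 ssdroot=/dev/nvme0n1p1" | \
    sed -ne 's/.*\bssdroot=\/dev\/\([abcdefklmnps0-9]*\)\b.*/\1/p'
# prints an empty line: "v" is not in the character class

echo "root=/dev/mmcblk0p1 ssdroot=/dev/nvme0n1p1" | \
    sed -ne 's/.*\bssdroot=\/dev\/\([a-z0-9]*\)\b.*/\1/p'
# prints: nvme0n1p1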

Just wanted to say, ty so much @crazy_yorick for your work on this thus far. I have one of the new Xavier NX dev kits, and I’m very interested in having the root of the filesystem be on my extremely fast m2 nvme, rather than my extremely slow sd card. Does anyone know if this has been tried on a Xavier NX?

Hello, I would also be very interested, like @erikneffca, to learn how to move the root filesystem to an SSD drive on the Xavier NX. As soon as I start installing things (e.g. the jetson-inference Python library, which requires PyTorch), the SD card gets saturated. I am new to the Jetson community, so I tried to apply the same procedures that were suggested for other Jetson modules to boot from SSD. However, it seems that the bootloader on the NX is different, and it didn't work for me.

Another approach would be to extend the root logical volume with the SSD drive, something which is quite straightforward on standard Linux distributions (create a group of physical partitions and use LVM to define a volume), but I couldn't succeed with it on the Jetson NX either.

I would be surprised if NVIDIA expects industrial users to be able to embed all their software on a tiny SD card or 16 GB of eMMC. I've been working with many ARM and x86 embedded systems, and it was always easy to deploy with an SSD (SATA or PCIe). Maybe NVIDIA experts can suggest something, as I would expect them to have brainstormed on this.

Has anyone found a solution for this?

I can confirm that the first method does work on the Xavier NX Dev Kit - I got it running just yesterday on my NX.

You need to check and adapt the name of the NVMe device (iirc it is /dev/nvme0n1 instead of /dev/nvme0n1p1)
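
A quick way to check how the drive and its partitions are actually named before adapting the scripts (device path assumed):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/nvme0n1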


@dkreutz thx for confirming - I’ll give it a try!

Hello,
I've found how to mount the SSD as rootfs with an extlinux.conf APPEND line, but it still requires an initrd modification (a modification of this method).

Main problem: the initrd created by JetPack 4.4 has an init script that searches for the root device first in device-name format, then in PARTUUID format. Then only "PARTUUID*", "mmcblk*", "sd*" and "nfs" are accepted as legal rootdev strings.
init [lines 62..173]:

rootdev="$(sed -ne 's/.*\broot=\/dev\/\([abcdefklmnps0-9]*\)\b.*/\1/p' < /proc/cmdline)"
if [ "${rootdev}" == "" ]; then
    uuid_regex='[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
    rootdev="$(sed -ne "s/.*\broot=\(PARTUUID=${uuid_regex}\)\b.*/\1/p" < /proc/cmdline)"
fi

if [ "${rootdev}" != "" ]; then
    echo "Root device found: ${rootdev}" > /dev/kmsg;
fi
if [[ "${rootdev}" == PARTUUID* ]]; then
    -------
elif [[ "${rootdev}" == mmcblk* ]]; then
    -------
elif [[ "${rootdev}" == sd* ]]; then
    -------
elif [[ "${rootdev}" == "nfs" ]]; then
    -------
else
    echo "No root-device: Mount failed" > /dev/kmsg;
    exec /bin/bash;
fi

The working second method, step by step:

  1. Flash the device (Xavier AGX or NX (tested on AGX only)) with JetPack 4.4.
  2. Unpack the initrd:
    Create a directory in your user folder and create 2 scripts there:
    unpack.sh:

#!/bin/bash
gzip -cd ../initrd | cpio -imd

pack.sh:

#!/bin/bash
find . -print0 | cpio --null --quiet -H newc -o | gzip -9 -n > ../initrdtossd

Make them executable, then create a folder to unpack into and cd there.

chmod +x unpack.sh pack.sh
sudo mkdir dir_initrdssd
sudo cp /boot/initrd ./
cd dir_initrdssd
sudo ../unpack.sh

  3. Patch init
    The initrd contents will be in the dir_initrdssd folder. Patch the init script with the following patch: initpatch.log (1.4 KB)
    (only .log text files are accepted by the forum :( )
    The patch changes the priority to PARTUUID over device name and adds "nvme*" as an accepted rootfs name.

  4. Pack back

sudo ../pack.sh

initrdtossd will be created in the folder with the scripts. Copy this file to /boot:

sudo cp ../initrdtossd /boot/

  5. Edit extlinux.conf
    Find the PARTUUID of the NVMe partition that will hold the future rootfs.

lsblk -o NAME,PARTUUID
Output needed:
nvme0n1
└─nvme0n1p1 XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

Open /boot/extlinux/extlinux.conf

sudo gedit /boot/extlinux/extlinux.conf

Add an APPEND line to the primary boot entry and create a backup boot entry.
Example:

TIMEOUT 30
DEFAULT primary

MENU TITLE L4T boot options

LABEL primary
MENU LABEL primary kernel
LINUX /boot/Image
INITRD /boot/initrdtossd
APPEND ${cbootargs} root=PARTUUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX quiet

LABEL backup
MENU LABEL backup kernel
LINUX /boot/Image
INITRD /boot/initrd
APPEND ${cbootargs} quiet

  6. DON’T FORGET to copy the rootfs to the SSD!!!
  7. Reboot and PROFIT


@crazy_yorick can you give pros and cons for both methods?

Ok.
Methods:

  1. systemd service
    In this case a systemd service, setssdroot.service, running before local-fs-pre.target is used (after systemd-remount-fs.service from the initrd, if an initrd with systemd is used). So it is one of the first units running when the system starts up, but there are parallel targets: swap.target, cryptsetup-pre.target and various low-level services (see systemd bootup). The service executes a script which switches the rootfs with a systemctl call: /bin/systemctl --no-block switch-root ${CHROOT_PATH}.

Pros:

  • No initrd or extlinux.conf changes are required, so it also works with JetPack 4.2, where the kernel is located in a special partition.

  • After building a new kernel, only the default initrd update workflow is required; no manual modification of the initrd (i.e. flash.sh ...). If the system doesn’t use an initrd, we only need to install the modules on both the SSD and the eMMC.

  • If /dev/nvme0n1p1 is absent, default rootfs is used, because setssdroot.service will not be executed.

Cons:

  • It’s still a hack :).

  • There are units executing in parallel with setssdroot.service, including udevd; thus, for some devices the modules will be loaded from the rootfs on eMMC while for others from NVMe. That’s why we need to install the same modules on both.

  • I don’t understand why, but we need copies of setssdroot.service and setssdroot.sh on the NVMe, otherwise the service completes with an error (but still switches the rootfs).

  2. extlinux.conf APPEND
    In this case an APPEND line with a new "root=..." parameter is added to extlinux.conf. At the kernel loading stage cboot appends this line to the kernel cmdline. So the cmdline has two "root=..." parameters: the first, "root=/dev/mmcblk0p1", from the device tree for cboot, and the second, the custom one appended via the extlinux.conf directive.

Pros:

  • Normal system bootup sequence. The main and backup kernels can be different.

  • All modules are loaded from the initrd or the rootfs on NVMe only. One only needs to copy the new kernel image and its initrd to the /boot/ directory on eMMC and configure /boot/extlinux/extlinux.conf there. No module installation for the new kernel on eMMC is required (except when the same kernel image is used as the backup).

  • The backup kernel is set by extlinux.conf:

LABEL backup
MENU LABEL backup kernel
LINUX /boot/Image
INITRD /boot/initrd
APPEND ${cbootargs} quiet

It can have its own initrd, and only its modules are installed on eMMC.

Cons:

  • Only JetPack 4.4 is supported for now, because cboot with extlinux.conf parsing is required.

  • After building a new kernel and updating the initrd, the init script has to be patched manually in the case of the default JetPack initrd. According to @gtj, a Dracut-generated initrd works without modification.

  • If the NVMe is absent, Linux gets stuck in the initrd. To boot the backup kernel and the rootfs on eMMC, a command from the debug (serial) terminal has to be used.
