Massflash encrypted NVMe fails with "Could not stat device /dev/mmcblk0 - No such file or directory"

hello shai.segev,

here are our steps to test on Orin Nano + r36.4:

$ sudo ./tools/backup_restore/l4t_backup_restore.sh -b -c -e nvme0n1 jetson-orin-nano-devkit
$ sudo ./tools/kernel_flash/l4t_initrd_flash.sh --use-backup-image --no-flash --network usb0 --massflash 1 jetson-orin-nano-devkit internal
$ sudo ./tools/kernel_flash/l4t_initrd_flash.sh --flash-only --massflash 1 --network usb0 --showlogs

Hello @JerryChang ,
Can you please confirm that the last step is performed with a different Jetson and NVMe (not the ones that were backed up)? I want to make sure we are both following the same process.

Thanks.

hello shai.segev,

we've confirmed that massflash works with different Orin Nano targets.

Hi @JerryChang ,
I tried the procedure with JetPack 6.1.
For some reason, on the duplicated setup, the boot order changes and booting from the local drive is last on the list. This made me think that the Jetson did not identify the boot partition.


How can this be fixed?
Also, I want to add disk encryption. What is the procedure for that? Do I need to back up a device encrypted with a generic key, or can I use a backup from an unencrypted device?

Thanks

hello shai.segev,

you may access the UEFI menu for modification,
i.e. Device Manager → NVIDIA Configuration → L4T Configuration

Hello @JerryChang,

I am not sure I understand your answer. The original Jetson has the correct boot order. The boot order changes when I use a backup.
It is not feasible to manually change the configuration for each device after it is mass-flashed in the factory.

hello shai.segev,

there's L4TConfiguration.dtbo to configure the boot order.
you may change the default boot order in L4TConfiguration.dts and use ADDITIONAL_DTB_OVERLAY_OPT to apply it during image creation.
for instance, there are existing dtbo files to configure boot orders:
$ sudo ADDITIONAL_DTB_OVERLAY_OPT="BootOrderNvme.dtbo" ./tools/kernel_flash/l4t_initrd_flash.sh --use-backup-image --no-flash --network usb0 --massflash 1 jetson-orin-nano-devkit internal

please see also the developer guide, Boot Order Selection.
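
for reference, here's a minimal sketch of rebuilding a custom boot-order overlay with dtc (paths assumed from a typical L4T BSP layout; DefaultBootPriority is the UEFI variable described under Boot Order Selection):
$ cd Linux_for_Tegra
$ dtc -I dtb -O dts -o /tmp/boot-order.dts bootloader/BootOrderNvme.dtbo
# edit the DefaultBootPriority string in /tmp/boot-order.dts, then rebuild:
$ dtc -I dts -O dtb -o bootloader/MyBootOrder.dtbo /tmp/boot-order.dts
$ sudo ADDITIONAL_DTB_OVERLAY_OPT="MyBootOrder.dtbo" ./tools/kernel_flash/l4t_initrd_flash.sh --use-backup-image --no-flash --network usb0 --massflash 1 jetson-orin-nano-devkit internal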

Thanks @JerryChang ,

I successfully duplicated a Jetson and NVMe using the --use-backup-image mass flash.
I am now trying the same with disk encryption. I tried the following, but the duplicated Jetson is not able to open the encrypted disk.
Create the golden image

sudo ./tools/kernel_flash/l4t_initrd_flash.sh --showlogs -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" --no-flash --network usb0 jetson-orin-nano-devkit internal
sudo ROOTFS_ENC=1 EXT_NUM_SECTORS=468846000 ./tools/kernel_flash/l4t_initrd_flash.sh -S 221GiB -p "--generic-passphrase" --massflash 1 --showlogs --no-flash --external-device nvme0n1p1 -i ./sym2_t234/key -c ./tools/kernel_flash/flash_l4t_t234_nvme_rootfs_enc.xml --external-only --append --network usb0 jetson-orin-nano-devkit external
cd mfi_jetson-orin-nano-devkit/
sudo systemctl stop udisks2
sudo systemctl start nfs-kernel-server
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --flash-only --massflash 1 --network usb0 --showlogs
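
To sanity-check the golden unit before taking the backup, I verify on the booted target that the rootfs is actually LUKS-encrypted (partition index assumed from my layout):

lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/nvme0n1
sudo cryptsetup luksDump /dev/nvme0n1p2 | head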

backup

sudo ./tools/backup_restore/l4t_backup_restore.sh -b -c -e nvme0n1 jetson-orin-nano-devkit

Create image from backup

sudo ROOTFS_ENC=1 EXT_NUM_SECTORS=468846000 ./tools/kernel_flash/l4t_initrd_flash.sh --use-backup-image --no-flash --network usb0 --massflash 2 -i ./ekb.key -p "--generic-passphrase" jetson-orin-nano-devkit external

Flash

sudo ./tools/kernel_flash/l4t_initrd_flash.sh --flash-only --massflash 1 --network usb0 --showlogs

On reboot I get

ERROR fail to unlock the encrypted dev /dev/nvme0n1p2.
/bin/bash: line 1: crypt_UDA command not found

Can you please advise on the correct way to use mass flash with disk encryption?
Thanks

hello shai.segev,

may I know what's the real use-case? there's a security concern with restoring an encrypted disk.

FYI,
here’s an appropriate approach for your testing,
please refer to Topic 291335 to create encrypted images with a generic key,
so that you may create a massflash (i.e. mfi_*.tar.gz) package to flash multiple devices simultaneously.
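
for example, after the --no-flash --massflash run completes, the generated mfi_ directory can be archived and moved to the factory host, where only the flash-only step runs (archive name is arbitrary):
$ tar -czf mfi_jetson-orin-nano-devkit.tar.gz mfi_jetson-orin-nano-devkit
then, on the factory host:
$ tar -xzf mfi_jetson-orin-nano-devkit.tar.gz && cd mfi_jetson-orin-nano-devkit
$ sudo ./tools/kernel_flash/l4t_initrd_flash.sh --flash-only --massflash 1 --network usb0 --showlogs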

Hi @JerryChang,

I am developing a comprehensive procedure tailored for the factory to ensure the seamless manufacturing of units equipped with fully secured Jetson devices (secure boot, disk encryption etc). This procedure will not only focus on the security aspects of the Jetson units but also include the installation of our complete software solution as part of the production process.

Our software solution is a robust and integrated system incorporating Docker containers, databases, and additional components.

By implementing this procedure, the factory will be equipped to consistently produce units ready to use out of the box, with all required security measures in place and the software fully configured. This will streamline production, reduce setup time post-manufacturing, and ensure compliance with our security and operational standards.

So the question is, how can I use massflash with "--use-backup-image" and a "generic key"? Or is there another way to achieve this without providing the factory with complete installation scripts and software?

hello shai.segev,

as mentioned above, that's not supported due to security concerns.

please refer to the above to create a massflash (i.e. mfi_*.tar.gz) package to flash multiple devices simultaneously.

or…
in case you're going to fuse a target, here's also an approach to avoid revealing any keys.
you may refer to $OUT/Linux_for_Tegra/bootloader/README_Massfuse.txt to create a massfuse blob.
recap as below:

The massfuse blob is generated in relatively safer place such as HQ and used to fuse one or more
Jetson devices simultaneously in a place such as factory floor without revealing any SBK or PKC key files in human readable form.

you should execute the "OFFLINE" approach (without a device connected) for your use-case to create a fuse blob;
here’s an example,
$ sudo BOARDID=3767 BOARDSKU=0003 FAB=300 BOARDREV=L.3 FUSELEVEL=fuselevel_production CHIPREV=1 CHIP_SKU=D5 ./nvmassfusegen.sh -i 0x23 --auth NS -X t234_odmfuse_pkc.xml jetson-orin-nano-devkit
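
on the factory floor, per README_Massfuse.txt, you then extract the blob and run it with targets in recovery mode; no key files are present there (archive/script names assumed from that README):
$ tar xvjf mfuse_jetson-orin-nano-devkit.tbz2
$ cd mfuse_jetson-orin-nano-devkit
$ sudo ./nvmfuse.sh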

I am trying to understand whether there is a way to create a massflash package that contains our Docker containers, databases, and other proprietary applications, and use it to flash onto an encrypted system.

Thanks

that's a demonstration based on the sample root file system; you may refer to Manually Generate a Root File System for your use-case.
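
for instance, the documented flow starts from the sample root file system shipped with the release (tarball name varies by version) and customizes it on the host before flashing:
$ cd Linux_for_Tegra/rootfs
$ sudo tar xpf ../../Tegra_Linux_Sample-Root-Filesystem_*.tbz2
$ cd ..
$ sudo ./apply_binaries.sh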

Hi @JerryChang,

Do you have more documentation to help us understand how to create a rootfs from a working Jetson?
I have tried various methods of creating a copy and loading it onto the rootfs, but in all cases the Jetson will not boot properly after flashing.
For example, I tried:

  • Create tarball on Jetson
sudo tar -cvpzf /tmp/backup.tar.gz \
--exclude=/proc \
--exclude=/tmp \
--exclude=/mnt \
--exclude=/dev \
--exclude=/sys \
--exclude=/run  \
--exclude=/media /
  • Extract to rootfs
sudo tar -xvpzf backup.tar.gz -C rootfs
  • When trying to flash, the flash fails to connect over SSH, with no additional errors

you should customize the root file system on an x86 host machine.

So, for example, should I run the following on the host?

#!/bin/bash
set -e

L4T_DIR="Linux_for_Tegra"
ROOTFS_DIR="$L4T_DIR/rootfs"

sudo "./$L4T_DIR/apply_binaries.sh"

# note: chrooting into the aarch64 rootfs on an x86 host requires
# qemu-user-static / binfmt support installed on the host
sudo chroot "$ROOTFS_DIR" /bin/bash <<'EOF'
set -e

apt update
apt install -y docker.io

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOL
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOL

# this needs a running dockerd inside the chroot, which may not work (see below)
docker pull hello-world:latest

exit
EOF

And then flash the device…

hello shai.segev,

you may see also $OUT/Linux_for_Tegra/nv_tools/scripts/nv_customize_rootfs.sh

Hi @JerryChang

Do you have a verified procedure to load Docker images onto the rootfs?
I am able to install Docker (tried both docker-ce and docker.io) and the installation goes fine, but when I try to run dockerd I get the following error:

# dockerd
INFO[2025-01-25T14:36:16.451249135Z] Starting up                                  
WARN[2025-01-25T14:36:16.457953872Z] Error while setting daemon root propagation, this is not generally critical but may cause some functionality to not work or fallback to less desirable behavior  dir=/var/lib/docker error="error getting daemon root's parent mount: Can't find mount point of /var/lib/docker"
INFO[2025-01-25T14:36:16.462349029Z] containerd not running, starting managed containerd 
INFO[2025-01-25T14:36:16.473840120Z] started new containerd process                address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=17977
INFO[2025-01-25T14:36:16.765103538Z] starting containerd                           revision= version=1.7.12
INFO[2025-01-25T14:36:16.878021261Z] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2025-01-25T14:36:16.907286322Z] skip loading plugin "io.containerd.snapshotter.v1.aufs"...  error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.8.0-51-generic\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2025-01-25T14:36:16.907862895Z] loading plugin "io.containerd.event.v1.exchange"...  type=io.containerd.event.v1
INFO[2025-01-25T14:36:16.908089345Z] loading plugin "io.containerd.internal.v1.opt"...  type=io.containerd.internal.v1
INFO[2025-01-25T14:36:16.910470652Z] loading plugin "io.containerd.warning.v1.deprecations"...  type=io.containerd.warning.v1
INFO[2025-01-25T14:36:16.910616067Z] loading plugin "io.containerd.snapshotter.v1.blockfile"...  type=io.containerd.snapshotter.v1
INFO[2025-01-25T14:36:16.911088341Z] skip loading plugin "io.containerd.snapshotter.v1.blockfile"...  error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
INFO[2025-01-25T14:36:16.911164671Z] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
WARN[2025-01-25T14:36:16.911749486Z] failed to load plugin io.containerd.snapshotter.v1.btrfs  error="failed to find the mount info for \"/var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs\""
INFO[2025-01-25T14:36:16.911843388Z] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2025-01-25T14:36:16.911999753Z] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2025-01-25T14:36:16.912050692Z] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2025-01-25T14:36:16.912594665Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2025-01-25T14:36:16.915210817Z] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2025-01-25T14:36:16.915630815Z] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="failed to find the mount info for \"/var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs\": skip plugin" type=io.containerd.snapshotter.v1
INFO[2025-01-25T14:36:16.915711522Z] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2025-01-25T14:36:16.915904596Z] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2025-01-25T14:36:16.916163617Z] could not use snapshotter btrfs in metadata plugin  error="failed to find the mount info for \"/var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs\""
WARN[2025-01-25T14:36:16.916220824Z] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2025-01-25T14:36:16.916385587Z] metadata content store policy set             policy=shared
INFO[2025-01-25T14:36:16.922773193Z] loading plugin "io.containerd.gc.v1.scheduler"...  type=io.containerd.gc.v1
INFO[2025-01-25T14:36:16.925485075Z] loading plugin "io.containerd.differ.v1.walking"...  type=io.containerd.differ.v1
INFO[2025-01-25T14:36:16.926346961Z] loading plugin "io.containerd.lease.v1.manager"...  type=io.containerd.lease.v1
INFO[2025-01-25T14:36:16.926535971Z] loading plugin "io.containerd.streaming.v1.manager"...  type=io.containerd.streaming.v1
INFO[2025-01-25T14:36:16.926783640Z] loading plugin "io.containerd.runtime.v1.linux"...  type=io.containerd.runtime.v1
INFO[2025-01-25T14:36:16.927233268Z] loading plugin "io.containerd.monitor.v1.cgroups"...  type=io.containerd.monitor.v1
INFO[2025-01-25T14:36:16.932554709Z] loading plugin "io.containerd.runtime.v2.task"...  type=io.containerd.runtime.v2
INFO[2025-01-25T14:36:16.933912811Z] loading plugin "io.containerd.runtime.v2.shim"...  type=io.containerd.runtime.v2
INFO[2025-01-25T14:36:16.934028541Z] loading plugin "io.containerd.sandbox.store.v1.local"...  type=io.containerd.sandbox.store.v1
INFO[2025-01-25T14:36:16.934136054Z] loading plugin "io.containerd.sandbox.controller.v1.local"...  type=io.containerd.sandbox.controller.v1
INFO[2025-01-25T14:36:16.934295559Z] loading plugin "io.containerd.service.v1.containers-service"...  type=io.containerd.service.v1
INFO[2025-01-25T14:36:16.934438774Z] loading plugin "io.containerd.service.v1.content-service"...  type=io.containerd.service.v1
INFO[2025-01-25T14:36:16.934573105Z] loading plugin "io.containerd.service.v1.diff-service"...  type=io.containerd.service.v1
INFO[2025-01-25T14:36:16.934746045Z] loading plugin "io.containerd.service.v1.images-service"...  type=io.containerd.service.v1
INFO[2025-01-25T14:36:16.934919291Z] loading plugin "io.containerd.service.v1.introspection-service"...  type=io.containerd.service.v1
INFO[2025-01-25T14:36:16.935058437Z] loading plugin "io.containerd.service.v1.namespaces-service"...  type=io.containerd.service.v1
INFO[2025-01-25T14:36:16.935189903Z] loading plugin "io.containerd.service.v1.snapshots-service"...  type=io.containerd.service.v1
INFO[2025-01-25T14:36:16.935337313Z] loading plugin "io.containerd.service.v1.tasks-service"...  type=io.containerd.service.v1
INFO[2025-01-25T14:36:16.936178910Z] loading plugin "io.containerd.grpc.v1.containers"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.936346717Z] loading plugin "io.containerd.grpc.v1.content"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.936463474Z] loading plugin "io.containerd.grpc.v1.diff"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.936579529Z] loading plugin "io.containerd.grpc.v1.events"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.936730361Z] loading plugin "io.containerd.grpc.v1.images"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.936842006Z] loading plugin "io.containerd.grpc.v1.introspection"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.937038567Z] loading plugin "io.containerd.grpc.v1.leases"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.937145840Z] loading plugin "io.containerd.grpc.v1.namespaces"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.937265705Z] loading plugin "io.containerd.grpc.v1.sandbox-controllers"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.937404803Z] loading plugin "io.containerd.grpc.v1.sandboxes"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.937505868Z] loading plugin "io.containerd.grpc.v1.snapshots"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.937647153Z] loading plugin "io.containerd.grpc.v1.streaming"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.937743422Z] loading plugin "io.containerd.grpc.v1.tasks"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.937871552Z] loading plugin "io.containerd.transfer.v1.local"...  type=io.containerd.transfer.v1
INFO[2025-01-25T14:36:16.938787998Z] loading plugin "io.containerd.grpc.v1.transfer"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.939002233Z] loading plugin "io.containerd.grpc.v1.version"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.939074149Z] loading plugin "io.containerd.internal.v1.restart"...  type=io.containerd.internal.v1
INFO[2025-01-25T14:36:16.941172751Z] loading plugin "io.containerd.tracing.processor.v1.otlp"...  type=io.containerd.tracing.processor.v1
INFO[2025-01-25T14:36:16.941495332Z] skip loading plugin "io.containerd.tracing.processor.v1.otlp"...  error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
INFO[2025-01-25T14:36:16.941563072Z] loading plugin "io.containerd.internal.v1.tracing"...  type=io.containerd.internal.v1
INFO[2025-01-25T14:36:16.942186424Z] skipping tracing processor initialization (no tracing plugin)  error="no OpenTelemetry endpoint: skip plugin"
INFO[2025-01-25T14:36:16.946812271Z] loading plugin "io.containerd.grpc.v1.healthcheck"...  type=io.containerd.grpc.v1
INFO[2025-01-25T14:36:16.946965079Z] loading plugin "io.containerd.nri.v1.nri"...  type=io.containerd.nri.v1
INFO[2025-01-25T14:36:16.947141003Z] NRI interface is disabled by configuration.  
INFO[2025-01-25T14:36:16.951257313Z] serving...                                    address=/var/run/docker/containerd/containerd-debug.sock
INFO[2025-01-25T14:36:16.951519922Z] serving...                                    address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2025-01-25T14:36:16.952362260Z] serving...                                    address=/var/run/docker/containerd/containerd.sock
INFO[2025-01-25T14:36:16.954605479Z] containerd successfully booted in 0.199779s  
INFO[2025-01-25T14:36:17.019141236Z] detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf 
INFO[2025-01-25T14:36:17.681109507Z] Loading containers: start.                   
WARN[2025-01-25T14:36:17.712490154Z] Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/6.8.0-51-generic
modprobe: WARNING: Module br_netfilter not found in directory /lib/modules/6.8.0-51-generic
, error: exit status 1 
INFO[2025-01-25T14:36:17.731727334Z] unable to detect if iptables supports xlock: 'iptables --wait -L -n': `iptables/1.8.7 Failed to initialize nft: Protocol not supported`  error="exit status 1"
INFO[2025-01-25T14:36:18.060362342Z] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2025-01-25T14:36:18.060583854Z] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=moby
INFO[2025-01-25T14:36:18.060814403Z] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables/1.8.7 Failed to initialize nft: Protocol not supported
 (exit status 1)

So far, these are the mounts I figured out are required to solve the previous errors:

cd Linux_for_Tegra/rootfs
sudo mount --bind /dev/ dev/
sudo mount --bind /sys/ sys/
sudo mount --bind /proc/ proc/
sudo mount --bind /dev/pts/ dev/pts
sudo mount --bind /var/run/docker.sock var/run/docker.sock
sudo mount --bind /sys/fs/cgroup/ sys/fs/cgroup/

# use the host's DNS configuration inside the chroot
sudo cp /etc/resolv.conf etc/resolv.conf.host
sudo mv etc/resolv.conf etc/resolv.conf.saved
sudo mv etc/resolv.conf.host etc/resolv.conf

sudo LC_ALL=C LANG=C.UTF-8 chroot . /bin/bash
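
As a workaround I am also experimenting with skipping dockerd inside the chroot entirely: running a second dockerd on the host with its data-root pointed at the rootfs' Docker directory and pulling arm64 images into it (socket/pid paths are arbitrary, and I assume the Docker versions on host and target need to match for the storage metadata to stay compatible):

# run from Linux_for_Tegra/rootfs: host dockerd writes into the rootfs image store
sudo dockerd --data-root "$PWD/var/lib/docker" --host unix:///tmp/rootfs-docker.sock --pidfile /tmp/rootfs-docker.pid &
sudo docker -H unix:///tmp/rootfs-docker.sock pull --platform linux/arm64 hello-world:latest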

Thanks!

hello shai.segev,

it looks like the massflash question has been resolved.
please summarize, and let's close this thread.


nope, I don't have experience loading docker images onto a rootfs.
please file another new topic; maybe the community can share some insights.