Unable to Backup/Restore - NVIDIA Jetson Orin Nano Devkit

Hello,

I have an image of a Jetson Orin Nano (Board 3767-300-0005-K.2-1-1-jetson-orin-nano-devkit-super) that I have been trying to restore on another Jetson Orin Nano (Board 3767-300-0005-R.1-1-1-jetson-orin-nano-devkit-super).

Both boards were previously flashed using SDK Manager and are running release 36.4.3. Each Jetson has a 2 TB NVMe drive, which is what each device boots from.

I am following the directions outlined in README_backup_restore.txt to back up and restore the image of my Orin Nano; however, I am running into the following error:

Waiting for device to expose ssh ......Waiting for device to expose ssh ...Device has booted into initrd. You can ssh to the target by the command:
$ ssh root@fc00:1:1:0::2
Cleaning up...
Log is saved to Linux_for_Tegra/initrdlog/flash_1-10_0_20250602-093530.log 
Run command: 
ln -s /proc/self/fd /dev/fd && mount -o nolock [fc00:1:1::1]:/home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/tools/backup_restore /mnt && /mnt/nvrestore_partitions.sh -e nvme0n1 -n 
 on root@fc00:1:1::2
bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/mnt/images ~
nvrestore_partitions.sh: Use the default nvpartitionmap.txt as the index file.
Comparing FIELDS[2]: '3767-300-0005-K.2-1-1-jetson-orin-nano-devkit-super-nvme-' with BOARD_SPEC: '3767-300-0005-R.1-1-1-jetson-orin-nano-devkit-super-nvme-'
nvrestore_partitions.sh: You are trying to flash images from a board model that does not
match the current board you're flashing onto.
40+0 records in
40+0 records out
20480 bytes (20 kB, 20 KiB) copied, 0.000454784 s, 45.0 MB/s
Error: Unable to partprobe /dev/nvme0n1

I modified the nvrestore_partitions.sh script so that the slight difference in board models does not halt the flashing; however, I still run into the issue of being unable to partprobe /dev/nvme0n1.
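For reference, the relaxation I made can be sketched like this (a hypothetical sketch, not the actual nvrestore_partitions.sh code; `strip_rev` is a helper I made up): treat the specs as compatible if they match once the board revision field (K.2 vs R.1) is ignored.

```shell
# Hypothetical sketch of relaxing the board-spec check: blank out the
# revision (the 4th dash-separated field) in each spec before comparing,
# so K.2 vs R.1 no longer halts the restore.
backup_spec="3767-300-0005-K.2-1-1-jetson-orin-nano-devkit-super-nvme-"
target_spec="3767-300-0005-R.1-1-1-jetson-orin-nano-devkit-super-nvme-"

strip_rev() { printf '%s\n' "$1" | awk -F- 'BEGIN{OFS="-"} {$4=""; print}'; }

if [ "$(strip_rev "$backup_spec")" = "$(strip_rev "$target_spec")" ]; then
    echo "board specs compatible (ignoring revision)"
else
    echo "board spec mismatch" >&2
fi
```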

The arguments that I am using to run the script are as follows:

For backing up the image:

sudo ./tools/backup_restore/l4t_backup_restore.sh -e nvme0n1 -b -c jetson-orin-nano-devkit-super

I can see after running this script that it was successful and that the image was stored in Linux_for_Tegra/tools/backup_restore/images.

For restoring the image on another device:

sudo ./tools/backup_restore/l4t_backup_restore.sh -e nvme0n1 -r jetson-orin-nano-devkit-super

The udisks2.service was stopped, and nfs-kernel-server was restarted after installing the flashing dependencies.
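For completeness, the host-side preparation amounted to the usual systemd commands (assuming an Ubuntu host with the nfs-kernel-server and udisks2 packages installed):

```shell
# Stop udisks2 so the host does not auto-mount the target's partitions
# while the flash tools are writing to them, and (re)start the NFS
# server that the initrd flash tools use to export the image directory.
sudo systemctl stop udisks2.service
sudo systemctl restart nfs-kernel-server.service
```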

Does anyone know why partprobe would fail on the target device?

Does the NVMe have to be unmounted before being restored with the backup script?

The two device models should be compatible… is this not actually the case when copying NVMe partitions?

Thanks in advance for the help.

Hi,

Please refer to workflow 3 in README_backup_restore.txt to massflash the backup image.

Thanks

Thanks for the response.

Following the directions in workflow 3, I executed the following:

I placed the Jetson to be cloned (booted from the 2 TB NVMe) into recovery mode and executed the following command:

sudo ./tools/backup_restore/l4t_backup_restore.sh -e nvme0n1 -b -c jetson-orin-nano-devkit-super

which finished successfully with the following output:

nvbackup_partitions.sh: Backup complete (after command prompt popping up)
Backup image is stored in /home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/tools/backup_restore/images
/home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/tools/backup_restore/images /home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
/home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/tools/kernel_flash/images/external /home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/tools/backup_restore/images /home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
/home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/tools/backup_restore/images /home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
/home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
Operation finishes. You can manually reset the device

The following nvpartitionmap.txt was created:

board_spec,3767-300-0005-K.2-1-1-jetson-orin-nano-devkit-super-
nvme0n1_gptmbr.img,gpt_1,0,40,,7d73d6eaf57999b5958164ae45e7cd83d77b9e84d63b9896429dc3726dfe9acc
nvme0n1_gptbackup.img,gpt_2,4000797327,33,,9ab07d297fdd93ac53e79f71455380807744c68f327a2ab9de76e6c35473200d
nvme0n1p1.tar.zst,nvme0n1p1,0,3997747272,tz,248854ab49598a492d1c28731bd03b4076c9861841b643b9f1073cad532b61c4
nvme0n1p2_bak.img,nvme0n1,40,262144,,09f2ed89d22ec35d43f9565038493a87f943c36615171ee0f4322cf04c5e4c95
nvme0n1p3_bak.img,nvme0n1,262184,1536,,82bf561028830249fe4fa1b00f5069f65a2e093dcfebcb17ee1a5857bb411fa3
nvme0n1p4_bak.img,nvme0n1,263720,64768,,fdb781c8fdb26c3c5b1868bbea8605f414fccac77745c6ab289ca07ec90628a6
nvme0n1p5_bak.img,nvme0n1,328488,262144,,6e3024869d99858526195b51d5e23dde68f35ace6475a30ef71264796f8faba4
nvme0n1p6_bak.img,nvme0n1,590632,1536,,07b3fa2c52f19f61d5fbbae510e7d2120a8da07899b6e3dfdb918cb5ff16186e
nvme0n1p7_bak.img,nvme0n1,592168,64768,,50893c89b25e1603955b0fe331e1a4aab68587d661ada4248fad90ef72c1bd2b
nvme0n1p8_bak.img,nvme0n1,656936,163840,,7327373b7c5013fa715114f96ff15585e7eb369de20c8695a03945c98456ce03
nvme0n1p9_bak.img,nvme0n1,820776,1024,,839bb3028eece67e4da4f4e330e0884173444c93c6d7710c2d930128b4e86d8b
nvme0n1p10_bak.img,nvme0n1,821800,131072,,5573bf2cefdd67a32deb2ea0cf4f158358f12b29cf0c18ad30a208d45e3421d3
nvme0n1p11_bak.img,nvme0n1,952872,163840,,20892a2d7004cfe26ef624c49a6dad009f0ca82e8bb3b485cde0feac12c4b505
nvme0n1p12_bak.img,nvme0n1,1116712,1024,,8c4cdaa334f3bf91072e61f216ebabd9db4596d02bc0b551114fd2275c58f824
nvme0n1p13_bak.img,nvme0n1,1117736,131072,,0b873a318a9f4d9b0b050e1c5a9c97cd5ad72d64eb081bcb90dc2f54c1b20b04
nvme0n1p14_bak.img,nvme0n1,1248832,819200,,4f43ad9aa3574b3da7d1ae7e4faf616db4295033eb87d3e6c5fc3bd7d6e2cb31
nvme0n1p15_bak.img,nvme0n1,2068032,982016,,2a416d87d44a5efe4642d752b2c0e8c45e5704567a753c566b76d2dabb8c09c3
QSPI0.img,qspi0,0,67108864,,91188138a560cc08736307cbd5907114f2851e86a135cb39f8f5d3ce0b8c487e

After that, I ran the following to generate the massflash package:

sudo ./tools/kernel_flash/l4t_initrd_flash.sh --use-backup-image --no-flash --network usb0 --massflash 2 jetson-orin-nano-devkit-super internal

which again finished successfully with the following output:

Welcome to Tegra Flash
version 1.0.0
Type ? or help for help and q or quit to exit
Use ! to execute system commands
 

 Entering RCM boot

[   0.0219 ] mb1_t234_prod_aligned_sigheader.bin.encrypt filename is from --mb1_bin
[   0.0219 ] psc_bl1_t234_prod_aligned_sigheader.bin.encrypt filename is from --psc_bl1_bin
[   0.0219 ] rcm boot with presigned binaries
[   0.0225 ] Generating blob for T23x
[   0.0236 ] tegrahost_v2 --chip 0x23 0 --generateblob blob.xml blob.bin
[   0.0240 ] The number of images in blob is 19
[   0.0250 ] blobsize is 81675081
[   0.0250 ] Added binary blob_uefi_jetson_minimal_with_dtb_sigheader.bin.encrypt of size 2043968
[   0.0801 ] Added binary blob_pscfw_t234_prod_sigheader.bin.encrypt of size 310768
[   0.0807 ] Added binary blob_mce_flash_o10_cr_prod_sigheader.bin.encrypt of size 187120
[   0.0810 ] Added binary blob_tsec_t234_sigheader.bin.encrypt of size 176128
[   0.0812 ] Added binary blob_applet_t234_sigheader.bin.encrypt of size 279808
[   0.0815 ] Not supported type: mb2_applet
[   0.0816 ] Added binary blob_mb2_t234_with_mb2_cold_boot_bct_MB2_sigheader.bin.encrypt of size 440944
[   0.0820 ] Added binary blob_xusb_t234_prod_sigheader.bin.encrypt of size 164864
[   0.0823 ] Added binary blob_nvpva_020_sigheader.fw.encrypt of size 2164640
[   0.0838 ] Added binary blob_display-t234-dce_sigheader.bin.encrypt of size 12070416
[   0.0942 ] Added binary blob_nvdec_t234_prod_sigheader.fw.encrypt of size 294912
[   0.0970 ] Added binary blob_bpmp_t234-TE950M-A1_prod_sigheader.bin.encrypt of size 1027008
[   0.0986 ] Added binary blob_tegra234-bpmp-3767-0003-3768-super_with_odm_sigheader.dtb.encrypt of size 264192
[   0.0997 ] Added binary blob_camera-rtcpu-t234-rce_sigheader.img.encrypt of size 458096
[   0.1003 ] Added binary blob_adsp-fw_sigheader.bin.encrypt of size 415008
[   0.1007 ] Added binary blob_spe_t234_sigheader.bin.encrypt of size 270336
[   0.1012 ] Added binary blob_tos-optee_t234_sigheader.img.encrypt of size 1887312
[   0.1017 ] Added binary blob_eks_t234_sigheader.img.encrypt of size 9232
[   0.1021 ] Added binary blob_boot.img of size 58959872
[   0.1433 ] Added binary blob_tegra234-p3768-0000+p3767-0005-nv-super.dtb of size 249353
[   0.3030 ] All RCM required files are saved in rcmboot_blob folder
rcmboot_blob generated.

*** no-flash flag enabled. Exiting now... *** 

User can run above saved command in factory environment without 
providing pkc and sbk keys to flash a device

Example:

    $ cd bootloader 
    $ sudo bash ./flashcmd.txt

Save initrd flashing command parameters to /home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra/tools/kernel_flash/initrdflashparam.txt
/tmp/tmp.LWDhvX6iYX /home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
writing boot image config in bootimg.cfg
extracting kernel in zImage
extracting ramdisk in initrd.img
/tmp/tmp.LWDhvX6iYX/initrd /tmp/tmp.LWDhvX6iYX /home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
96292 blocks
257148 blocks
/tmp/tmp.LWDhvX6iYX /home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
flashimg0=boot0.img
/home/candor-admin/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_ORIN_NANO_TARGETS/Linux_for_Tegra
Success
Cleaning up...
Finish generating flash package.
Put device in recovery mode, run with option --flash-only to flash device.

Finally, I flashed the other device with the following command:

sudo ./tools/kernel_flash/l4t_initrd_flash.sh --flash-only --massflash 1 --network usb0

which failed with the following log:

***************************************
*                                     *
*  Step 3: Start the flashing process *
*                                     *
***************************************
Waiting for target to boot-up...
[... line repeated 15 more times ...]
Waiting for device to expose ssh ......Waiting for device to expose ssh ...Run command: flash on fc00:1:1:0::2
SSH ready
blockdev: cannot open /dev/mmcblk0boot0: No such file or directory
[ 0]: l4t_flash_from_kernel: Serial Number: 1421523054522
[ 0]: l4t_flash_from_kernel: Starting to create gpt for emmc
Active index file is /mnt/internal/flash.idx
Number of lines is 1
max_index=0
[ 0]: l4t_flash_from_kernel: Successfully create gpt for emmc
[ 0]: l4t_flash_from_kernel: Starting to create gpt for external device
Active index file is /mnt/external/flash.idx
Number of lines is 17
max_index=16
writing item=0, 9:0:primary_gpt,0,20480,nvme0n1_gptmbr.img,20480,fixed-<reserved>-0,a7210cf9f0a0496ab692f38623168482dc183303
Writing primary_gpt partition with nvme0n1_gptmbr.img
20480 bytes from /mnt/external/nvme0n1_gptmbr.img to /dev/nvme0n1: 1KB block=20 remainder=0
dd if=/mnt/external/nvme0n1_gptmbr.img of=/dev/nvme0n1 bs=1K skip=0  seek=0 count=20
20+0 records in
20+0 records out
20480 bytes (20 kB, 20 KiB) copied, 0.00171786 s, 11.9 MB/s
Writing primary_gpt partition done
Error: Invalid argument during seek for read on /dev/nvme0n1
[ 27]: l4t_flash_from_kernel: Error: partprobe failed. This indicates that:
 -   the xml indicates the gpt is larger than the device storage
 -   the xml might be invalid
 -   the device might have a problem.
 Please make correction.
Flash failure
Either the device cannot mount the NFS server on the host or a flash command has failed. Check your network setting (VPN, firewall,...) to make sure the device can mount NFS server. Debug log saved to /tmp/tmp.HUJdACyMM3. You can access the target's terminal through "sshpass -p root ssh root@fc00:1:1:0::2" 
Cleaning up...

I ran this after disabling the firewall on my PC with sudo ufw disable. Additionally, I verified that I had started nfs-kernel-server.service and disabled udisks2.service.

Update: I noticed that while I was using 2 TB NVMe drives in both Jetsons, the drives were not identical models.

I recreated an image on a third Jetson that had a 2 TB SSD identical to the one in the original Jetson I was trying to flash, and when following the backup_restore directions, flashing the image onto the original Jetson succeeded.

Assuming the cause of my issue is a discrepancy between the SSDs, is there a way to perform the same image restoration on a Jetson with a slightly different NVMe?

Hi,

It might be related to the actual size of the NVMe.
For example, you can check your storage with the command below:

sudo fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 59.28 GiB, 63652757504 bytes, 124321792 sectors
Units: sectors of 1 * 512 = 512 bytes

Thus, if the two NVMe drives have different numbers of sectors, the mass flash might fail.
You could add -s <APP_SIZE> to your flashing command to make the APP_SIZE smaller.
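To make that concrete, here is a sketch of the arithmetic. Both sector counts below are hypothetical examples (the source value matches the gpt_2 end in the partition map earlier in the thread; the target value is a made-up smaller "2 TB" drive):

```shell
# Sketch: compare source and target NVMe sizes (sector counts as reported
# by `sudo fdisk -l /dev/nvme0n1` on each machine; values here are
# hypothetical) and estimate how much smaller the APP partition must be.
src_sectors=4000797360   # source drive
dst_sectors=3907029168   # a hypothetical 2 TB drive with fewer sectors
if [ "$dst_sectors" -lt "$src_sectors" ]; then
    deficit_bytes=$(( (src_sectors - dst_sectors) * 512 ))
    echo "target is ${deficit_bytes} bytes smaller than the source"
    echo "shrink APP via -s <APP_SIZE> by at least this much, then redo the backup"
fi
```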

Thanks

I tried a few more combinations of NVMe drives and found that flashing succeeded as long as the NVMe in the backed-up Jetson was at most the same size as the NVMe in the flashing target.

Thanks for the help!
