Re-flashing Orin NX JP5.1.2 using a prepared system image

Hi,

I have an Orin NX board (p3767 + p3768). I am able to flash the Orin NX and boot from USB (128 GB) using the command below:

sudo ADDITIONAL_DTB_OVERLAY_OPT="BootOrderUsb.dtbo" ./tools/kernel_flash/l4t_initrd_flash.sh --external-device sda1 -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml" --showlogs --network usb0 jetson-orin-nano-devkit internal

I prepared the system according to our requirements and took a backup of the entire system image using:

dd if=/dev/sda1 | ssh user@laptop_ip dd of=/target/path/image.raw

sudo mksparse -v --fillpattern=0 image.raw system.img

I now have the system image, which is 122.1 GB.
I want to flash this image to the Orin now.
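
For reference, a quick size sanity check before reusing the image (a minimal sketch using the paths from the commands above; the raw dump must be the full size of the source partition, since mksparse only compresses runs of the fill pattern):

# On the Jetson, before taking the dump: size of the source partition in bytes
sudo blockdev --getsize64 /dev/sda1

# On the host: the raw image should report the same number of bytes
stat -c '%s' /target/path/image.raw

# The sparse system.img is usually smaller on disk but expands back to the
# raw size when flashed
ls -lh /target/path/image.raw system.img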

I then replaced Linux_for_Tegra/tools/kernel_flash/images/external/system.img with this backup system.img.

I used the command below to flash:
sudo ADDITIONAL_DTB_OVERLAY_OPT="BootOrderNvme.dtbo" ./tools/kernel_flash/l4t_initrd_flash.sh --flash-only --external-device sda1 -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml" --showlogs --network usb0 jetson-orin-nano-devkit internal

It gets stuck at the point below.

[   3.7387 ] Sending membct and RCM blob
[   3.7392 ] tegrarcm_v2 --instance 3-3 --chip 0x23 0 --pollbl --download bct_mem mem_rcm_sigheader.bct.encrypt --download blob blob.bin
[   3.7396 ] BL: version 1.2.0.0-t234-54845784-562369e5 last_boot_error: 0
[   3.7476 ] Sending bct_mem
[   3.7562 ] Sending blob

flashlog.txt (51.3 KB)

The board does not get flashed. I am not sure if the command I am using to reuse the system image is correct.

Please advise, thanks.

The recommended way is to use our backup/restore tool:
https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/SD/FlashingSupport.html#backing-up-and-restoring-a-jetson-device

@DaveYYY

Hi,
so do I run
$ sudo ./tools/backup_restore/l4t_backup_restore.sh -e sda1 -b jetson-orin-nano-devkit
to first take a backup of the system image?

Yes, but use sda instead of sda1.
Then run

sudo ./tools/backup_restore/l4t_backup_restore.sh -e sda -r jetson-orin-nano-devkit

to restore the image.
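
Put together, a full backup-then-restore run looks roughly like this (a sketch assuming the external drive enumerates as sda and the board is connected over USB in forced recovery mode for each step):

# 1) On the source board (in forced recovery): back up the whole external drive
sudo ./tools/backup_restore/l4t_backup_restore.sh -e sda -b jetson-orin-nano-devkit

# 2) The images are written under Linux_for_Tegra/tools/backup_restore/images/
#    (the same directory the restore log later shows as /mnt/images)

# 3) On the target board (also in forced recovery): restore those images
sudo ./tools/backup_restore/l4t_backup_restore.sh -e sda -r jetson-orin-nano-devkit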

@DaveYYY

I was able to take the backup of the system image.

I ran the command to restore; I saw the restore percentage, and then the power got disconnected.
When I ran it again in forced recovery mode, I encountered the below:

Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for device to expose ssh ......RTNETLINK answers: File exists
RTNETLINK answers: File exists
Device has booted into initrd. You can ssh to the target by the command:
$ ssh root@fe80::1%enx225b4d854b13
Cleaning up...
Log is saved to Linux_for_Tegra/initrdlog/flash_3-3_0_20240626-125556.log 
Run command: 
ln -s /proc/self/fd /dev/fd && mount -o nolock [fc00:1:1::1]:/home/sanya/Orin/Image/JetPack_5.1.2_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/tools/backup_restore /mnt && /mnt/nvrestore_partitions.sh -e sda -n 
 on root@fc00:1:1::2
/mnt/images ~
nvrestore_partitions.sh: Use the default nvpartitionmap.txt as the index file.
partx: specified range <1:0> does not make sense

It’s a bug where brand-new disks that have never been formatted cannot be used: there is no partition table on the disk, so partx cannot delete it.

Download the 5.1.3 BSP and use the updated version of Linux_for_Tegra/tools/backup_restore/nvrestore_partitions.sh to fix it, or patch it directly:

diff --git a/scripts/backup-restore/nvrestore_partitions.sh b/scripts/backup-restore/nvrestore_partitions.sh
index 78ae589..695a700 100755
--- a/scripts/backup-restore/nvrestore_partitions.sh
+++ b/scripts/backup-restore/nvrestore_partitions.sh
@@ -313,10 +313,18 @@
 			echo "${SCRIPT_NAME} Checksum of ${FIELDS[2]} does not match the checksum in the index file."
 			exit 1
 		fi
-		# partx delete must be called before flashing, and partx add after flashing.
-		partx -d "/dev/${INTERNAL_STORAGE_DEVICE}"
+		# Delete previous GPT if it exists.
+		if partx -s "/dev/${INTERNAL_STORAGE_DEVICE}" >/dev/null 2>&1; then
+			partx -d "/dev/${INTERNAL_STORAGE_DEVICE}"
+		fi
+		# Flash GPT image, refresh and validate.
 		dd if="${FIELDS[1]}" of="/dev/${INTERNAL_STORAGE_DEVICE}"
+		sync
 		partx -v -a "/dev/${INTERNAL_STORAGE_DEVICE}"
+		if ! partx -s "/dev/${INTERNAL_STORAGE_DEVICE}" >/dev/null 2>&1; then
+			echo "Error: GPT does not exist on the /dev/${INTERNAL_STORAGE_DEVICE}"
+			exit 1
+		fi
 		GPT_EXISTS=true
 		break
 	fi
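
To confirm whether this is the case you are hitting, you can check from the initrd shell (the ssh address is printed in the flash log) whether the disk actually has a partition table (a diagnostic sketch, assuming the external drive shows up as /dev/sda):

# List the block device and any partitions the kernel knows about
lsblk /dev/sda

# partx -s prints the partition table; on a blank, never-formatted disk it fails,
# which is what trips the unpatched "partx -d" call
partx -s /dev/sda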

@DaveYYY

Hi, I made the changes to nvrestore_partitions.sh, but I still encounter the same issue.

***************************************
*                                     *
*  Step 3: Start the flashing process *
*                                     *
***************************************
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for device to expose ssh ......RTNETLINK answers: File exists
RTNETLINK answers: File exists
Device has booted into initrd. You can ssh to the target by the command:
$ ssh root@fe80::1%enxa29ee6977ae6
Cleaning up...
Log is saved to Linux_for_Tegra/initrdlog/flash_3-3_0_20240626-140041.log 
Run command: 
ln -s /proc/self/fd /dev/fd && mount -o nolock [fc00:1:1::1]:/home/sanya/Orin/Image/JetPack_5.1.2_Linux_JETSON_ORIN_NX_TARGETS/Linux_for_Tegra/tools/backup_restore /mnt && /mnt/nvrestore_partitions.sh -e sda -n 
 on root@fc00:1:1::2
/mnt/images ~
nvrestore_partitions.sh: Use the default nvpartitionmap.txt as the index file.
partx: specified range <1:0> does not make sense

nvrestore_partitions.sh

# The GPT must be the first partition flashed, so this block ensures that the
# GPT exists and is flashed first.
for value in $(grep -v -E '(^ *$|^#)' < "${FILE_NAME}"); do
        declare -a FIELDS
        for part in {1..6}; do
                FIELDS[part]=$(echo "$value" | awk -F, -v part=${part} '{print $part}')
        done
        if [ "${FIELDS[2]}" = 'gpt_1' ]; then
                checksum=$(sha256sum "${FIELDS[1]}" | awk '{print $1}')
                if [ "${checksum}" != "${FIELDS[6]}" ]; then
                        echo "${SCRIPT_NAME} Checksum of ${FIELDS[2]} does not match the checksum in the index file."
                        exit 1
                fi
                # Delete previous GPT if it exists.
                if partx -s "/dev/${INTERNAL_STORAGE_DEVICE}" >/dev/null 2>&1; then
                        partx -d "/dev/${INTERNAL_STORAGE_DEVICE}"
                fi
                # Flash GPT image, refresh and validate.
                dd if="${FIELDS[1]}" of="/dev/${INTERNAL_STORAGE_DEVICE}"
                sync
                partx -v -a "/dev/${INTERNAL_STORAGE_DEVICE}"
                if ! partx -s "/dev/${INTERNAL_STORAGE_DEVICE}" >/dev/null 2>&1; then
                        echo "Error: GPT does not exist on the /dev/${INTERNAL_STORAGE_DEVICE}"
                        exit 1
                fi
                GPT_EXISTS=true
                break
        fi
done

Can the new USB stick be used normally on other devices?
Or get another USB cable and try again.
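
One quick way to verify the stick on another Linux machine (a sketch, assuming it enumerates there as /dev/sdX; double-check the device name before running anything against it):

# Confirm the stick is detected with the expected size and model
lsblk -o NAME,SIZE,MODEL

# Non-destructive read test of the first ~1 GB
sudo dd if=/dev/sdX of=/dev/null bs=1M count=1024 status=progress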

@DaveYYY

I restored the image and manually reset the Orin.

But it does not boot up completely:


[2024-06-26 15:03:41.974] 
[2024-06-26 15:03:41.974] ESC   to enter Setup.
[2024-06-26 15:03:41.974] F11   to enter Boot Manager Menu.
[2024-06-26 15:03:41.979] Enter to continue boot.
[2024-06-26 15:03:41.979] **  WARNING: Test Key is used.  **
[2024-06-26 15:03:42.152] 
[2024-06-26 15:03:42.152]   Error: Could not detect network connection.
[2024-06-26 15:03:43.287] 
[2024-06-26 15:03:43.287]   Error: Could not detect network connection.
[2024-06-26 15:03:45.932] 
[2024-06-26 15:03:45.932] L4TLauncher: Attempting Direct Boot
[2024-06-26 15:03:51.808] I/TC: Secondary CPU 1 initializing
[2024-06-26 15:03:51.808] I/TC: Secondary CPU 1 switching to normal world boot
[2024-06-26 15:03:51.829] I/TC: Secondary CPU 2 initializing
[2024-06-26 15:03:51.849] I/TC: Secondary CPU 2 switching to normal world boot
[2024-06-26 15:03:51.849] I/TC: Secondary CPU 3 initializing
[2024-06-26 15:03:51.869] I/TC: Secondary CPU 3 switching to normal world boot
[2024-06-26 15:03:51.897] I/TC: Secondary CPU 4 initializing
[2024-06-26 15:03:51.897] I/TC: Secondary CPU 4 switching to normal world boot
[2024-06-26 15:03:51.918] I/TC: Secondary CPU 5 initializing
[2024-06-26 15:03:51.918] I/TC: Secondary CPU 5 switching to normal world boot
[2024-06-26 15:03:51.938] I/TC: Secondary CPU 6 initializing
[2024-06-26 15:03:51.958] I/TC: Secondary CPU 6 switching to normal world boot
[2024-06-26 15:03:51.979] I/TC: Secondary CPU 7 initializing
[2024-06-26 15:03:51.979] I/TC: Secondary CPU 7 switching to normal world boot
[2024-06-26 15:03:52.407] [    0.599337] tegra_dc_assign_hw_data: no matching compatible node
[2024-06-26 15:03:52.413] [    0.605361] tegradccommon module_init failed
[2024-06-26 15:03:52.413] [    0.609648] tegradc module_init failed
[2024-06-26 15:03:53.585] I/TC: Reserved shared memory is disabled
[2024-06-26 15:03:53.585] I/TC: Dynamic shared memory is enabled
[2024-06-26 15:03:53.605] I/TC: Normal World virtualization support is disabled
[2024-06-26 15:03:53.625] I/TC: Asynchronous notifications are disabled
[2024-06-26 15:03:56.473] [    4.677304] sd 0:0:0:0: [sda] No Caching mode page found
[2024-06-26 15:03:56.473] [    4.682786] sd 0:0:0:0: [sda] Assuming drive cache: write through

It gets stuck at this point.
restoreLog1.txt (101.8 KB)
restoreConsoleLog1.txt (52.8 KB)

@DaveYYY

It worked. I had tried restoring it on one of our custom boards, and that is where I encountered the above.
When I restored it on the devkit, it booted up successfully.
I want to get the same working on the custom boards as well.

Of course an image that works on a DevKit will not work on your custom boards…
If you know you have to customize the BSP for it to work on custom boards, then you should also know not to mix backup images between the two.


@DaveYYY

The l4t_backup_restore tool works fine. Thanks.

Is it possible to take a backup of only a certain partition and flash only that? For example, my current image is 120 GB; can I make two partitions of 30 GB and 90 GB, take a backup of the 30 GB partition, and restore only that 30 GB?

Please advise.

We only support backing up and restoring the entire device.


@DaveYYY

While restoring the image back to the Orin, I encounter this issue: mount.nfs: Connection timed out

***************************************
*                                     *
*  Step 3: Start the flashing process *
*                                     *
***************************************
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for device to expose ssh ......RTNETLINK answers: File exists
RTNETLINK answers: File exists
Device has booted into initrd. You can ssh to the target by the command:
$ ssh root@fe80::1%enx1e4abab0f024
Cleaning up...
Log is saved to Linux_for_Tegra/initrdlog/flash_3-1_0_20240716-115608.log 
Run command: 
ln -s /proc/self/fd /dev/fd && mount -o nolock [fc00:1:1::1]:/media/9e0f5b47-dd2c-451a-99c3-4d6d926e11f3/Linux_for_Tegra/tools/backup_restore /mnt && /mnt/nvrestore_partitions.sh -e sda -n 
 on root@fc00:1:1::2
mount.nfs: Connection timed out

Disable the firewall on your host PC.
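
For example, if the host runs Ubuntu with ufw (an assumption; adjust for whatever firewall your distribution uses), you can turn it off for the duration of the restore:

# Temporarily turn the firewall off while flashing
sudo ufw disable

# ...and turn it back on afterwards
sudo ufw enable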


@DaveYYY

I was able to restore the image on many of the boards, but on one particular board of the same kind I encounter the following:

Device has booted into initrd. You can ssh to the target by the command:
$ ssh root@fe80::1%enx02c20fa03bbd
Cleaning up...
Log is saved to Linux_for_Tegra/initrdlog/flash_3-4_0_20240729-105524.log 
Run command: 
ln -s /proc/self/fd /dev/fd && mount -o nolock [fc00:1:1::1]:/media/sanya/9e0f5b47-dd2c-451a-99c3-4d6d926e11f3/Linux_for_Tegra/tools/backup_restore /mnt && /mnt/nvrestore_partitions.sh -e sda -n 
 on root@fc00:1:1::2
/mnt/images ~
nvrestore_partitions.sh: Use the default nvpartitionmap.txt as the index file.
nvrestore_partitions.sh: You are trying to flash images from a board model that does not
match the current board you're flashing onto.

I am able to flash this board with the basic flash command without any issues:

sudo ADDITIONAL_DTB_OVERLAY_OPT="BootOrderUsb.dtbo" ./tools/kernel_flash/l4t_initrd_flash.sh --external-device sda1 -c tools/kernel_flash/flash_l4t_external.xml -p "-c bootloader/t186ref/cfg/flash_t234_qspi.xml" --showlogs --network usb0 jetson-orin-nano-devkit internal

But while restoring the image, I get the above message.

You can comment out or delete the code that checks model types in nvrestore_partitions.sh.
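
For illustration only, the change amounts to commenting out the block in your copy of nvrestore_partitions.sh that prints the "board model … does not match" message so the script keeps going. This is a hypothetical sketch, not the actual NVIDIA code; the variable names below are made up:

# Hypothetical reconstruction of the model check; comment it out to skip the validation.
#if [ "${backup_board_model}" != "${current_board_model}" ]; then
#        echo "${SCRIPT_NAME}: You are trying to flash images from a board model that does not"
#        echo "match the current board you're flashing onto."
#        exit 1
#fi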


@DaveYYY

Thanks! It worked.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.