Issue with l4t_backup_restore.sh on Jetson Orin NX 8GB with JetPack 6.0.0

I am using a Jetson Orin NX 8GB development kit and encountered an issue with the l4t_backup_restore.sh script when migrating from JetPack 5.1.3 to JetPack 6.0.0.

On JP5.1.3, I could export the root filesystem and create a backup image using the following command:

sudo ./tools/backup_restore/l4t_backup_restore.sh -e nvme0n1 -b -c jetson-orin-nano-devkit

This successfully exported the root filesystem, and I could create a backup image of the actual used space (~27GB) instead of the full NVMe partition (~256GB).

However, after upgrading to JP6.0.0, I encountered the following issues:

  1. Script execution fails: The same command no longer works; the script exits with errors (I can attach the exact error messages if needed).

  2. Full partition cloning: When attempting to clone the NVMe storage, the script clones the entire partition size (~256GB), rather than just the used space (~27GB). This results in unnecessarily large backup images.

Expected Behavior:

  • On JP6.0.0, I would like to export the root filesystem (27GB used) without cloning the entire 256GB partition.

  • I expect the l4t_backup_restore.sh script to behave similarly to JP5.1.3, efficiently handling only the used data.

Environment:

  • Jetson Orin NX 8GB Dev Kit

  • JetPack 6.0.0 (L4T version)

  • NVMe root filesystem (~256GB partition, ~27GB used)

Request:
Could you provide guidance on how to properly backup and restore the root filesystem on JP6.0.0 without cloning the unused partition space? Is there a recommended update to the l4t_backup_restore.sh script or an alternative procedure for JP6.x?

*** Please note that this reply is generated by LLM automatically ***

The issue you’re experiencing with the script on JetPack 6.0 (JP6.0) is due to changes in how the backup-restore tool handles partition sizes. On JP5.1.3, the script could export the root filesystem and create a backup image of the actual used space (~27 GB) instead of the full NVMe partition (~256 GB). On JP6.0, however, the script clones the entire partition (~256 GB) rather than just the used space (~27 GB).

To back up and restore the root filesystem on JP6.0 without cloning the unused partition space, you can use the dd command with the conv=sparse option. This creates a sparse file, which only allocates disk space for the actual data rather than the full partition size.

Here’s an example of how to use dd to create a backup image of the root filesystem on JP6.0:

sudo dd if=/dev/nvme0n1p1 of=/mnt/backup.img bs=1M conv=sparse

In this command:

  • if=/dev/nvme0n1p1 specifies the input file (the root filesystem on the NVMe partition)
  • of=/mnt/backup.img specifies the output file (the backup image)
  • bs=1M sets the block size to 1MB
  • conv=sparse enables sparse file creation

This command will create a backup image of the root filesystem that only allocates space for the actual data (27GB) rather than the full partition size (256GB).
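If you want to confirm that an image really is sparse, compare its apparent size with the blocks actually allocated on disk. A quick sketch using a throwaway file (substitute the real image path, e.g. /mnt/backup.img from the example above, to check an actual backup):

```shell
# Create a sparse demo file: apparent size 256 GB, almost nothing allocated.
truncate -s 256G /tmp/demo.img
ls -lh /tmp/demo.img   # reports the apparent size (256G)
du -h /tmp/demo.img    # reports blocks actually allocated (~0 for this demo)
rm /tmp/demo.img
```

For a real backup image, `ls -lh` would show the full partition size (~256 GB) while `du -h` would show only the data dd actually wrote (~27 GB) if the sparse copy worked.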

A --sparse option on the backup-restore tool itself would be the cleaner solution, but no such option is available on JP6.0, so the dd command is the practical workaround.

To restore the backup image, use dd again, but this time without conv=sparse. When the target is a block device, conv=sparse skips writing blocks that are all zeros, which would leave whatever stale data is already on the partition in place; the zeros must be written out for a faithful restore:

sudo dd if=/mnt/backup.img of=/dev/nvme0n1p1 bs=1M

In this command:

  • if=/mnt/backup.img specifies the input file (the backup image)
  • of=/dev/nvme0n1p1 specifies the output device (the root filesystem partition on the NVMe drive)
  • bs=1M sets the block size to 1 MB

This command will restore the backup image to the root filesystem on the NVMe partition.

Compared with JP5.1.3, the backup-restore tool on JP6.0 does not provide the same functionality for creating sparse backup images. Using dd with the conv=sparse option when creating the image provides a similar result.

In summary, to back up and restore the root filesystem on JP6.0 without cloning the unused partition space, create the backup image with dd and the conv=sparse option. This captures only the used data (~27 GB) instead of the full partition size (~256 GB).
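Since tar only reads files that actually exist, archiving the mounted rootfs is another way to capture just the used data. A sketch only (the mount point and output path here are assumptions, not part of the official tooling, and the rootfs should be mounted from a recovery/initrd environment rather than while it is the running system):

```shell
# Sketch: archive the used files instead of imaging the whole partition.
# Assumes the rootfs partition can be mounted read-only at /mnt/rootfs.
sudo mount -o ro /dev/nvme0n1p1 /mnt/rootfs
sudo tar --xattrs --numeric-owner -czpf /tmp/rootfs-backup.tar.gz -C /mnt/rootfs .
sudo umount /mnt/rootfs
```

The --xattrs and --numeric-owner flags preserve extended attributes and numeric UIDs/GIDs, which matters when restoring onto a freshly formatted partition.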


Hello NVIDIA team,

I am using a Jetson Orin NX 8GB with JetPack 6.0 (JP6.0). My device has only a single NVMe drive, and I want to back up only the root filesystem (rootfs) without cloning the entire NVMe partition (~256 GB, of which the rootfs uses only ~27 GB).

I have tried running l4t_backup_restore.sh and other backup scripts, but:

  • On JP6.0, the script clones the full partition instead of just the used space.

  • Options like --sparse are not available.

My questions:

  1. How can I run the backup/restore script to only copy the root filesystem on a device with a single NVMe drive?

  2. Is there a recommended workflow for creating a backup image of just the used rootfs (~27 GB), without touching the empty space?

  3. Can this be done without booting into recovery/initrd, or is it required to unmount rootfs?

Thank you for your guidance.

Best regards,
Tuan

Hello NVIDIA Team,

Following up on the issue I previously reported regarding nvbackup_partitions.sh not backing up ext4 partitions correctly:

I have identified that the root cause was the isext4() function returning incorrect results due to the way it parsed the output of blkid. After fixing this function, the script now correctly detects ext4 partitions and backs them up using .tar.gz as intended.

After the fix, the tar backup files appear as expected in the backup folder, and the backup logic now works correctly for ext4 partitions while still using dd for non-ext4 partitions. Importantly, the .tar.gz files now reflect the actual used data size of the partitions, rather than the full partition size.
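For reference, a minimal sketch of such a check (the function name matches the one in the script, but the exact original implementation may differ). Asking blkid for the TYPE value directly avoids fragile parsing of its full output line:

```shell
# Sketch: detect whether a device or image file holds an ext4 filesystem.
# "blkid -o value -s TYPE" prints only the filesystem type string (e.g. "ext4"),
# so no text parsing of the full blkid output line is needed.
isext4() {
    local dev="$1"
    [ "$(blkid -o value -s TYPE "${dev}" 2>/dev/null)" = "ext4" ]
}
```

The function returns success only when blkid identifies the device as ext4, so the caller can branch cleanly between the tar path and the dd path.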

Thank you for your attention, and I hope this information helps for any future updates or documentation improvements.

Best regards,
Tuantm

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.