Dear NVIDIA team:
It takes a long time when we use the mass flash command (sudo ./tools/kernel_flash/l4t_initrd_flash.sh --flash-only --network usb0 --massflash 2) to flash our JP6.0 custom carrier board (about 120 minutes) compared to our JP5.1.1 custom carrier board (about 30 minutes).
From the JP6.0 log, we can see that mmcblk0p1_bak.img takes a long time, because it contains the entire partition.
60635852+0 records in
60635852+0 records out
62091112448 bytes (62 GB, 58 GiB) copied, 7707.3 s, 8.1 MB/s
Writing mmcblk0p1 partition done
JP6.0_mass_flash_long_time_log.txt (12.6 KB)
JP5.1.1_mass_flash_OK.log (22.8 KB)
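As a rough sanity check with the numbers from the dd output above, copying the 62 GB image at ~8.1 MB/s already accounts for essentially the whole flash time:

awk 'BEGIN { printf "%.0f minutes\n", 62091112448 / (8.1 * 1000 * 1000) / 60 }'
# prints about 128 minutes, which roughly matches the ~120-minute total we see on JP6.0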
Their backup procedures are exactly the same. What's the difference?
Backup steps:
sudo ./tools/backup_restore/l4t_backup_restore.sh -b -c b600_32G-agx-orin
sudo ./tools/kernel_flash/l4t_initrd_flash.sh --use-backup-image --no-flash --network usb0 --massflash 2 b600_32G-agx-orin mmcblk0p1
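In case it helps narrow things down, one way to confirm where the time goes is to wrap each step in time (a minimal sketch using the same commands and board name as above):

# Time the backup step
time sudo ./tools/backup_restore/l4t_backup_restore.sh -b -c b600_32G-agx-orin

# Time generating the mass-flash package from the backup image
time sudo ./tools/kernel_flash/l4t_initrd_flash.sh --use-backup-image --no-flash --network usb0 --massflash 2 b600_32G-agx-orin mmcblk0p1

# Time the actual flash
time sudo ./tools/kernel_flash/l4t_initrd_flash.sh --flash-only --network usb0 --massflash 2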
Hi,
If the device cannot be flashed/booted, please refer to this page to get the UART log from the device:
Jetson/General debug - eLinux.org
And get logs of the host PC and the Jetson device for reference. If you are using a custom board, you can compare the UART logs of the developer kit and the custom board to get more information.
Also, please check the FAQs:
Jetson AGX Orin FAQ
If possible, we would suggest following the Quick Start in the developer guide to re-flash the system:
Quick Start — NVIDIA Jetson Linux Developer Guide documentation
And see if the issue still persists on a clean-flashed system.
Thanks!
Hi 592803276,
Could you reproduce the same issue on the devkit?
Have you tried using l4t_backup_restore.sh to restore and check the flash time?
Hi 592803276:
The backup function in JP 6.0 has an issue. Please refer to Using the initrd tool to flash the image extracted from backup_restore - #21 by DaveYYY
It will dd the whole APP partition, which leads to a very long backup time.
But I have never tried to mass flash the Jetson series myself, so this is just for reference.
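If you want to confirm that behaviour on your side, you could compare the size of the generated backup image with the space actually used on the APP partition. A rough sketch, assuming the backup images end up under tools/backup_restore/images/ (the exact path may differ on your setup):

# On the host: check the backup image size
ls -lh tools/backup_restore/images/mmcblk0p1_bak.img   # close to the full 62 GB if the whole partition was dd'd
du -h tools/backup_restore/images/mmcblk0p1_bak.img    # blocks actually allocated on the host disk

# On the Jetson: space actually used on the APP partition (mounted as /)
df -h /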
Thank you very much for your response. I'll try the method you suggested.
Or you can just verify with the latest R36.4.0, which should include that fix.
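For reference, you can check which L4T release a device is currently running with:

cat /etc/nv_tegra_release
# R36.4.0 should show up as something like "# R36 (release), REVISION: 4.0, ..."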