Jetson Orin Nano does not boot after flashing Jetpack 6

How did you dump this log out?

That’s the log from the console during flashing (not during booting).
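
For reference, a console log like this is usually captured over the Jetson's debug UART. A minimal sketch, assuming the USB-serial adapter shows up as /dev/ttyUSB0 on the host (check dmesg for the actual device name):

    # Capture the Jetson debug UART (115200 8N1) to a file while flashing.
    # /dev/ttyUSB0 is an assumption; it may be /dev/ttyACM0 on your machine.
    minicom -b 115200 -D /dev/ttyUSB0 -C flash_console.log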

Then try another flashing method here.

If even this one does not work, then it really could be a board issue.

You can start from the first workaround.

Thanks Wayne, I will try the First Workaround later today (need to go somewhere for now). I will send you an update here!


Hi Wayne,

I tried the First Workaround. I don’t know which controller my SSD is on, so I tried both. The command for the C7 controller produced an error early in the flashing process, so I only show the log for the C4 controller:

  • flash_internal_concole_c4.log (23.0 KB)

  • I could not find the log from the host, because this time I used flash.sh and I don’t know if/where the logs are saved (see the note just below)
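
For what it’s worth, one way to keep a host-side log when calling flash.sh directly is to tee its output yourself. A minimal sketch; the board config and target here are only placeholders, substitute whatever command the workaround uses:

    # Keep a copy of everything flash.sh prints on the host.
    sudo ./flash.sh <board-config> <target-device> 2>&1 | tee flash_host.log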

Unfortunately, the flashing was not successful due to the errors that can be seen in the logs above. The last error states:

[  12.7661 ]           14	UDA                                 	 1248832	 2068031
[  12.7661 ]           15	reserved                            	 2068032	 3050047
[  12.7661 ] 
[  12.7661 ] Start flashing
[  12.7687 ] tegradevflash_v2 --pt flash.xml.bin --create
[  12.7691 ] Bootloader version 01.00.0000
[  13.1603 ] Erasing spi: 0 ......... [Done]
[  14.1690 ] Writing partition secondary_gpt with gpt_secondary_3_0.bin [ 16896 bytes ]
[  14.1699 ] [................................................] 100%
[  14.1742 ] 000000004d4d2c01: E> NV3P_SERVER: Failed to initialize partition table from GPT.
[  14.4068 ] 
[  14.4069 ] 
Error: Return value 1
Command tegradevflash_v2 --pt flash.xml.bin --create
Failed flashing generic.

The QSPI fails to erase. It’s easy to see in the log above, as the (obviously incorrect) erase operation takes about 1 s, when in reality it should take much longer (maybe even up to 30 seconds? I don’t recall off the top of my head).

Same problem as described here: Flash.sh (and tegraflash.py) fails to erase QSPI on non-SDK modules (P3767 Orin Nano 8GB)

The problem is that the protection bits are set in the QSPI (this is the factory default according to the datasheet), and the code in MB2 not only fails to clear these bits before erasing, it also doesn’t check whether the erase completes successfully or fails.
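
If you want to clear the block-protection bits by hand from a Linux that can see the QSPI, mtd-utils should be able to do it. A minimal sketch, assuming the QSPI is exposed as /dev/mtd0 (check /proc/mtd for the real name):

    # Clear the write-protection (block-protection) bits on the QSPI device.
    # /dev/mtd0 is an assumption; pick the entry /proc/mtd lists for the QSPI.
    sudo flash_unlock /dev/mtd0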

The workaround is to boot into Linux, which will immediately clear those protection bits in the QSPI. Then they never get set again, and you can reflash from MB2 as many times as you want.

Now, the problem is that the QSPI is only exposed to Linux if you boot Linux directly from recovery mode (RCM). If Linux is booted from the QSPI flash, the flash itself is not visible to Linux. It is unclear to me why this is the case.
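
A quick way to check whether the QSPI is visible from a given boot is to list the MTD devices. On an RCM-booted Linux the QSPI partitions should show up; on a QSPI-booted Linux they apparently do not:

    # If the QSPI is exposed, its partitions appear as mtdN entries.
    cat /proc/mtd
    ls -l /dev/mtd*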

The l4t_initrd_flash.sh script will do exactly this, using Linux to flash the QSPI, in which case you would never see this problem.
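
For reference, the initrd-flash invocation for an Orin Nano devkit with an NVMe drive looks roughly like the JetPack 6 quick-start command below. Treat it as a sketch: the config file names and flags depend on the exact L4T release and carrier board:

    # Flash the QSPI and put the rootfs on NVMe from an RCM-booted Linux.
    sudo ./tools/kernel_flash/l4t_initrd_flash.sh \
      --external-device nvme0n1p1 \
      -c tools/kernel_flash/flash_l4t_external.xml \
      -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" \
      --showlogs --network usb0 \
      jetson-orin-nano-devkit internal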

@andreas6
I wonder if that is still the case. If I understand the comment correctly, @r.a.lekkerkerker’s board was already flashed with JetPack 5 before, so the protection you mention here should no longer exist?

Hi @WayneWWW, @andreas6,
Yes, the problems seem similar, but you are right that JetPack 5 was installed on my Orin before.
I tried flashing with l4t_initrd_flash.sh, but now I receive the error below on the host:

Waiting for target to boot-up...
Waiting for target to boot-up...
Waiting for device to expose ssh ......RTNETLINK answers: File exists
RTNETLINK answers: File exists
Waiting for device to expose ssh ...Run command: flash on fc00:1:1:0::2
SSH ready
blockdev: cannot open /dev/mmcblk3boot0: No such file or directory
[ 0]: l4t_flash_from_kernel: Starting to create gpt for emmc
Active index file is /mnt/internal/flash.idx
Number of lines is 79
max_index=78
writing item=62, 6:0:primary_gpt, 512, 19968, gpt_primary_6_0.bin, 16896, fixed-<reserved>-0, 9b04535dc3a7abba31395e2eda5f40cae6ade18e
Error: Could not stat device /dev/mmcblk3 - No such file or directory.
Flash failure
Cleaning up...

The console keeps repeating:

[  307.166555] NFS: state manager: check lease failed on NFSv4 server fc00:1:1:0::1 with err3

Hi @WayneWWW,

There has been a breakthrough today. I got my hands on a dedicated Ubuntu 20.04 machine (all my earlier attempts to install JetPack 6.0 were from a dedicated Ubuntu 22.04 machine) and I managed to install JetPack 5.1.2 from it. Encouraged by this small success, I decided to try installing JetPack 6.0 from the Ubuntu 20.04 host, and it actually succeeded! I managed to install JetPack 6.0 DP on a Jetson Orin Nano 4GB on a Yahboom SUB carrier board. The host log can be downloaded here: flash_1-2_0_20231220-193707.log (51.6 KB)

Two days ago I contacted Yahboom to ask for the custom BSP, but they forwarded me to the original NVIDIA BSP, so I guess that shouldn’t be the cause of the problems, and apparently it wasn’t. Yahboom did advise me to install JetPack 5.1.1 or 5.1.2, because they wrote that JetPack 6.0 is not compatible with their Orin Nano board. I guess I will run into other problems in the future, but at least I can boot the Orin with JetPack 6.0 now!

Thanks again, and hopefully this thread can help other people who are trying to install JetPack 6.0.

