[Orin Nano/NX] Boots to OTA and needs a QSPI reflash

Hi,

I have recently been facing this issue: the device fails to boot to rootfs, goes into OTA/recovery boot, and then needs a QSPI reflash.

I can understand the failsafe mechanism, but why doesn’t the device just reboot (retry)?
Or hang and report an error, so that we could recover it with an external device?

It happens about once a month (on different SOMs), and we need to reflash the QSPI using L4T.
We still can’t find the root cause of this issue, which makes us less confident about shipping this product.
Some of our customers won’t use L4T for development, so they will struggle to recover the QSPI.

How can we avoid or skip this mechanism, or disable the OTA/A-B partitioning?

Or is there a way to build an OTA server that reflashes the QSPI automatically once this happens?
If so, how can we trigger this issue in order to test it?

May I know what issue you are seeing on your board?

You should just make sure nv-l4t-bootloader-config.service is ready before you run the reboot command.
What’s your use case for your product?
Are you doing successive reboots on your product?
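
For example, a minimal guard for a reboot-test script could look like the sketch below. This assumes systemd and the stock service name; whether a finished run reports “active” (RemainAfterExit=yes) or “inactive” with Result=success depends on how the oneshot unit is defined, so the sketch accepts either:

#!/bin/bash
# Sketch: wait until nv-l4t-bootloader-config.service has run to
# completion before issuing the reboot.
svc=nv-l4t-bootloader-config.service
is_done() {
    state="$(systemctl show -p ActiveState --value "$svc")"
    result="$(systemctl show -p Result --value "$svc")"
    [ "$state" = "active" ] || { [ "$state" = "inactive" ] && [ "$result" = "success" ]; }
}
until is_done; do
    sleep 1
done
sudo reboot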

Even if you run into recovery boot, you can switch back to normal boot without reflashing the QSPI.

You can do this to get back to normal kernel boot: change the OS chain A status in the UEFI menu (the “UEFI → OS chain A status” setting discussed below), which tells the board that slot A is working fine so it will boot from it again.

The device fails to boot to rootfs and goes to OTA, and then it needs a QSPI reflash.

If the service is not ready and we reboot, it goes into this situation, am I right?

It’s an IPC, but we run a pretty common test: a continuous reboot test.

Can we just disable this service and reboot to create this situation?

If so, I will try this out.

Disabling and stopping nv-l4t-bootloader-config.service doesn’t lead to this situation.
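
For reference, the disable-and-stop step was just the standard systemd commands, along these lines:

# Stop the service immediately and prevent it from starting on future boots.
sudo systemctl disable --now nv-l4t-bootloader-config.service
# Confirm it is stopped and no longer enabled.
systemctl status nv-l4t-bootloader-config.service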

But toggling the UEFI → OS chain A status on/off does put the board into this state.

I still don’t know why this happens when we never change the UEFI → OS chain A status.

It is not OTA; it is the recovery boot state.

It is entered once you reach ROOTFS_RETRY_COUNT_MAX.

Please check that nv-l4t-bootloader-config.service is ready before you run the reboot command in your test.

Please don’t disable this service.

It’s a fail-over mechanism: changing the OS chain A status lets your board know that slot A is working fine, so it will try to boot from it.

Setting ROOTFS_RETRY_COUNT_MAX seems to work:

+ '[' -f /home/joezhang/workspace/TEV-Jetson_Jetpack_script/TEK6070-ORIN_20231213/Linux_for_Tegra/bootloader/L4TConfiguration.dtbo ']'
+ process_l4t_conf_dtbo
+ local a_node=
+ '[' 0 == 1 ']'
+ local retry_count_max=
+ '[' -n 3 ']'
+ '[' 3 -ge 1 ']'
+ '[' 3 -le 3 ']'
++ printf %02x 3
+ retry_count_max=03
+ a_node='{data = [ 03 00 00 00 ];runtime;locked;};'
+ update_overlay_dtb /home/joezhang/workspace/TEV-Jetson_Jetpack_script/TEK6070-ORIN_20231213/Linux_for_Tegra/bootloader/L4TConfiguration.dtbo RootfsRetryCountMax '{data = [ 03 00 00 00 ];runtime;locked;};'
+ local dtb_file=/home/joezhang/workspace/TEV-Jetson_Jetpack_script/TEK6070-ORIN_20231213/Linux_for_Tegra/bootloader/L4TConfiguration.dtbo
+ local name=RootfsRetryCountMax
+ local 'node={data = [ 03 00 00 00 ];runtime;locked;};'
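
For context, the trace above comes from the flashing step with the retry count set in the environment. A sketch of the invocation (the board config name here is only an example, and exact variable handling may differ between L4T releases):

# Write RootfsRetryCountMax = 3 into L4TConfiguration.dtbo while flashing.
# "jetson-orin-nano-devkit" is a placeholder config; use your own board's.
sudo ROOTFS_AB=1 ROOTFS_RETRY_COUNT_MAX=3 ./flash.sh jetson-orin-nano-devkit internal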

Does this count keep accumulating even if I boot successfully to the prompt?
If so, how can I check and clear it?

We tried to disable and stop this service to reproduce this issue, but failed.
How can I manually trigger this issue so I can continue debugging (without changing the UEFI → OS chain A status)?

No, it won’t count up if you boot successfully and that service runs as expected.
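
If you want to inspect the counters yourself, L4T ships the nvbootctrl tool. Assuming your release supports the rootfs target, something like this dumps the slot status and retry counts:

# Bootloader slot info (active slot, retry count, boot status).
sudo nvbootctrl dump-slots-info
# Rootfs slot info, if rootfs A/B is enabled on this release.
sudo nvbootctrl -t rootfs dump-slots-info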

May I know what your use case is now?
Do you want to enter the recovery boot state?

OK, sounds great.

I want to test how frequently this issue happens and what causes it.
Then we can set the criteria for the reboot test,
or even create a “safe reboot” mechanism to prevent it.

This behavior is caused by running the reboot command before nv-l4t-bootloader-config.service gets ready, so if you check that this service is ready in your reboot test (see the readiness sketch earlier in the thread), you won’t hit this issue.
