Yes, I have tested it before and switching slots works. The problem is when the other slot is broken (typically in a scenario where the second slot was just updated). If the second slot is broken and you have switched to boot from it on the next boot, the platform does not recognize that it is broken and that it should fall back, after three tries, to the last bootable slot. This mechanism is called the rootfs fail-over mechanism and is described in the L4T 35.2.1 documentation here: fail-over rootfs.
I have also set ROOTFS_AB, yes.
My problem is that mark-slot-successful / mark-slot-unsuccessful is no longer an option for nvbootctrl since L4T 35.1.
At the beginning of this post, I mentioned a workaround you proposed in another issue: writing a value to RootfsInfo in the efivars. Now that file is no longer there. I guess it has been replaced by RootfsStatusSlotA and RootfsStatusSlotB. That makes sense, but I don't understand the information that should go in there.
By default it is set to 0007 0000…, which I guess means "bootable". You recommended writing "\x07\x00\x00\x00\x3c\xc0\x01\x00" to set the status to normal.
What is the difference? Should I do that in my update system? And what is the value for "unbootable"?
Is that documented somewhere?
Is there an easier/cleaner way to update these values?
How can I set a rootfs slot as bootable / unbootable?
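For what it's worth, the 8-byte value above is consistent with the standard Linux efivarfs layout (this part is generic kernel behavior, not NVIDIA-specific): the first 4 bytes are the little-endian EFI variable attribute mask, and everything after that is the variable payload. A minimal Python sketch to decode such a dump (the interpretation of the payload itself as a "slot status" is my assumption based on this thread):

```python
import struct

# Standard EFI variable attribute bits (UEFI spec); efivarfs prepends
# this 4-byte mask to every variable file under /sys/firmware/efi/efivars.
EFI_VARIABLE_NON_VOLATILE = 0x1
EFI_VARIABLE_BOOTSERVICE_ACCESS = 0x2
EFI_VARIABLE_RUNTIME_ACCESS = 0x4

def parse_efivar(blob: bytes):
    """Split an efivarfs blob into (attribute mask, payload bytes)."""
    attrs, = struct.unpack_from("<I", blob, 0)
    return attrs, blob[4:]

# The default value seen on the device ("0007 0000 ..." in hexdump -x,
# i.e. bytes 07 00 00 00 followed by zeros):
default = b"\x07\x00\x00\x00" + b"\x00\x00\x00\x00"
attrs, payload = parse_efivar(default)
assert attrs == (EFI_VARIABLE_NON_VOLATILE
                 | EFI_VARIABLE_BOOTSERVICE_ACCESS
                 | EFI_VARIABLE_RUNTIME_ACCESS)
status, = struct.unpack("<I", payload)
print(hex(attrs), status)  # → 0x7 0
```

So the recommended "\x07\x00\x00\x00\x3c\xc0\x01\x00" has the same standard attribute mask (0x7); only the 4-byte payload differs from the all-zero default. What the payload values mean exactly is what I'm asking NVIDIA to document.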
ahh… yes, you'll need to use the capsule update.
Please see also this discussion thread: Topic 243516.
As far as I understand, the capsule update is meant for bootloader/kernel updates. I want to update the rootfs, but I want to make sure that if anything goes wrong with the updated rootfs slot, the bootloader selects the last working rootfs partition. After zeroing the active rootfs slot, the kernel hangs with the error "exFAT-fs (mmcblk0p2): invalid boot record signature", but it never reboots or tries to boot the other rootfs slot. It just freezes with the mount failure.
Did you test the failover rootfs mechanism from L4T 35.2.1, and did it work for you? If yes, can you please elaborate on it?
I can confirm that with 5.0.2 the failover worked.
With 5.1 it does not seem to work anymore. My system hangs forever in a kernel panic when I remove the content of the active rootfs entirely.
5.0.2 was using cboot, right? I think we are in a completely different world now with UEFI. For me it is not working either. I am building my own distribution using Yocto and the meta-tegra layer from the colleagues on GitHub. Not everything is exactly the same as in the Ubuntu distribution, so I try to isolate problems coming from L4T versus what I get from meta-tegra.
With the cboot versions I had everything working, but the kernel version was not the latest, so I updated to L4T 35.1. I think the failover mechanism was still working for me there. What I have been missing anyway is a fully functional nvbootctrl tool, so I updated again to 35.2.1, which looks much better, but I am still missing a way to set slots to normal or unbootable.
Since 5.x it was already using UEFI. Failover worked with UEFI in the last version.
Thanks for the confirmation.
Thanks for the confirmation. I think 5.0.2 was using UEFI, not cboot.
The installation of the rootfs is not the problem at the moment, but I need to be able to intentionally set whether a slot is bootable or unbootable.
Is that not possible any more?
Is it documented anywhere how you are working with the ESP partition? I see that nvbootctrl creates a file on the ESP partition with values representing the EFI variables. For example, for set-active-boot-slot 1 it will create a file containing a 1, where normally it is set to 0. Which variable, and which values, should be updated to set a slot as bootable or unbootable?
Another example is the capsule update you are linking: it also writes to another file that has to be created, and I guess setting bit 2 there tells UEFI to install the capsule.
What other handling is implemented, and where can I find it?
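I can't confirm which file on the ESP is being written, but if the "bit 2" in question is the standard UEFI OsIndications flag EFI_OS_INDICATIONS_FILE_CAPSULE_DELIVERY_SUPPORTED (bit 2, i.e. 0x04, per the UEFI spec), the manipulation of the 8-byte little-endian payload would look like this sketch (the interpretation as OsIndications is my assumption):

```python
import struct

# UEFI spec: OsIndications is a 64-bit little-endian bitmask; bit 2
# requests processing of a capsule delivered via the file system.
FILE_CAPSULE_DELIVERY_SUPPORTED = 1 << 2  # 0x04

def set_capsule_bit(osindications_payload: bytes) -> bytes:
    """Return the 8-byte payload with the capsule-on-disk bit set."""
    value, = struct.unpack("<Q", osindications_payload)
    return struct.pack("<Q", value | FILE_CAPSULE_DELIVERY_SUPPORTED)

# All-zero payload -> only bit 2 set:
print(set_capsule_bit(b"\x00" * 8).hex())  # → 0400000000000000
```

ORing the bit in (rather than overwriting the whole value) keeps any other indication bits that the firmware may already have set.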
Is the rootfs A/B update fully functional? I mean, has it been tested that you can update from A to B and then again from B to A? As the others say, it seems the failover mechanism for coming back is not working properly.
It's unable to set rootfs A/B as bootable/unbootable in r35.
If you want to switch the rootfs AB slot, you can run
$ nvbootctrl -t rootfs set-active-boot-slot <slot>.
For the use case of updating the rootfs with rootfs A/B enabled, you need to implement the image-based OTA.
However, this is currently not supported; it's planned for the next public release, i.e. L4T r35.3.
BTW, Capsule update will not touch the rootfs, it only updates bootloader components.
Thanks for the important information!
Thank you Jerry, that's important information I needed.
Just two last things related to my debugging:
1: In which EFI variable is the retry count for the bootloader managed across reboots?
2: Where does L4TLoader check to know which slot it should boot from? I have a bug in my system where it switches from A to B but not from B to A, and BootchainFwStatus looks good when I set it to go back to A (all 0s). I am aware I introduced the bug myself, but hopefully your development team can help me with this information.
It should retry 3 times and then switch to the other rootfs slot automatically.
Normally, if one rootfs slot is unbootable, the kernel watchdog reboots the device, and if it fails to boot into that slot 3 consecutive times, UEFI tries to boot from the other rootfs slot.
The logic is that we save the failed-boot status in a scratch register; once it reaches 3, UEFI switches the slot and updates the slot status in the UEFI variables, so the device can boot from the new slot.
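To make sure I understand the described behavior, here is a toy model of that counter logic (the real scratch-register encoding is platform-specific; this only illustrates the retry-then-switch behavior, assuming a 3-try threshold and slots numbered 0 and 1):

```python
# Toy model of the described failover: a per-boot fail counter kept in a
# scratch register; after 3 consecutive failed boots UEFI flips the slot.
MAX_RETRIES = 3

def next_boot_slot(current_slot: int, failed_boots: int) -> tuple[int, int]:
    """Return (slot to boot next, updated fail count) after one failed boot."""
    failed_boots += 1
    if failed_boots >= MAX_RETRIES:
        return 1 - current_slot, 0  # switch A<->B and reset the counter
    return current_slot, failed_boots

slot, fails = 0, 0
history = []
for _ in range(4):  # simulate four consecutive boot failures
    slot, fails = next_boot_slot(slot, fails)
    history.append(slot)
print(history)  # → [0, 0, 1, 1]: two more tries on slot 0, then switch to 1
```

What several of us are reporting is that in practice the device never gets this far: the kernel hangs at the mount failure instead of rebooting, so the counter never reaches 3.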
Okay, please tell me if I understand it correctly:
- We have an A/B mechanism which SHOULD (but actually does not) start the system from the other slot in case a slot becomes broken.
- We have no mechanism to mark the broken slot as bootable again after repairing it.
How is that of any use? The whole idea of A/B is to have a failsafe update mechanism. In the case you describe it is just one try better than a single-slot system: if the update fails you are down to one slot, and if that one fails too you have… let me count… zero???
In L4T 5.0.2, where it should not have worked at all, you were able to come up with a solution that worked perfectly. Now with L4T 5.1 we have an officially supported solution which does not work at all?
Something is going terribly wrong here?
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.