Since L4T does not support OTA updates with A/B redundancy enabled, I am going to implement it myself by exchanging the contents of the rootfs.
With nvbootctrl I can set the active slot. I deleted the whole rootfs from the active slot, and after 3 failed attempts it automatically booted the other one, which is correct.
1. Can I reduce the time that is waited after the kernel panic? It seems very slow.
After that I thought I could mount the other partition and copy a working rootfs into it. The active slot is still the other one, and the dump now tells me:
Current rootfs slot: B
Active rootfs slot: A
num_slots: 2
slot: 0, retry_count: 0, status: unbootable
slot: 1, retry_count: 3, status: normal
When I reboot, slot 0 is not tried anymore because its status is unbootable. 2. How do I reset the bootable status so that I can attempt to boot it again?
Did you mean the image-based OTA with rootfs A/B enabled? That's right, it's currently not supported on l4t-r35.1.
It's the MAX_ROOTFS_AB_RETRY_COUNT configuration.
Please check your SMD config; it defaults to 3, and you may modify it for testing.
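For example, you can locate where that setting lives in the BSP tree first (the exact file varies between releases), then edit the value there before flashing:
$ grep -rn "MAX_ROOTFS_AB_RETRY_COUNT" Linux_for_Tegra/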
You'll need to use nvbootctrl to mark the slot as active.
Please note that there's a -t option to select the target, bootloader or rootfs; the default is bootloader.
i.e. $ sudo nvbootctrl [-t <target>] <command>
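For example, the commands used in this thread would look like this when targeting the rootfs slots (sub-commands may differ between releases, so check nvbootctrl's help output):
$ sudo nvbootctrl -t rootfs dump-slots-info
$ sudo nvbootctrl -t rootfs get-current-slot
$ sudo nvbootctrl -t rootfs set-active-boot-slot 0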
Did you mean the image-based OTA with rootfs A/B enabled? That's right, it's currently not supported on l4t-r35.1.
When can we expect it? We need it.
I have slot 0, which was marked “unbootable” after the system tried to boot it MAX_ROOTFS_AB_RETRY_COUNT times.
I copied a new rootfs into it and want to set it to “bootable” again.
Is this possible? nvbootctrl does not seem to offer an option for it.
It seems like a rootfs slot is lost forever once it has used up all its retry counts?
Okay, so you're telling me that the new JetPack release is totally useless for us?
We are preparing a product and expected the promises of the developer previews to be followed by a working JetPack…
Hey @JerryChang
It seems you need to reset bits 0+1 and 14+15 in /sys/firmware/efi/efivars/RootFsInfo… to remove the “unbootable” status.
I saw you use the bootctrl HAL that is also used in Android. Surprisingly, that tool does not offer the ability to reset the “unbootable” flag either. Are you going to implement it in your tool, or should I just reset the value in the efivars myself?
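In case it helps anyone else, here is a rough sketch of how the raw value could be inspected and rewritten from Linux. The variable name is truncated above, the data is assumed to be a single 32-bit word, and the bit semantics are only my reading of the dump output, so verify against your own system before writing anything back:
#!/bin/bash
# Sketch only: inspect and rewrite the rootfs A/B status EFI variable.
# VAR is a placeholder -- fill in the full variable name including its GUID.
VAR=/sys/firmware/efi/efivars/RootFsInfo-XXXXXXXX

chattr -i "$VAR"    # efivarfs files are immutable by default

# The first 4 bytes of an efivarfs file are the attribute word, the rest is the data.
attr_hex=$(dd if="$VAR" bs=1 count=4 2>/dev/null | xxd -p)
data_hex=$(dd if="$VAR" bs=1 skip=4 2>/dev/null | xxd -p)
echo "data bytes (little endian): $data_hex"

# Interpret the data as a little-endian 32-bit value.
val=$((16#$(echo "$data_hex" | fold -w2 | tac | tr -d '\n')))

# ASSUMPTION: bits 0+1 hold the slot retry count and bits 14+15 its status.
# Setting the retry count back to 3 and clearing the status bits is one
# plausible reading of the post above; double-check with your own dump first.
new=$(( (val & ~(3 << 14)) | 3 ))
new_hex=$(printf '%08x' "$new" | fold -w2 | tac | tr -d '\n')

# A write to efivarfs must contain the attribute word plus the data in one write.
printf '%s%s' "$attr_hex" "$new_hex" | xxd -r -p > "$VAR"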
FYI,
We plan to create separate UEFI variables for the rootfs A/B status, so users may reset them in the UEFI menu.
It's scheduled for the next JetPack public release.
thanks
Then we'll have to implement the A/B system ourselves; we can't wait that long to proceed with production.
In case we take the approach below, can we expect it to still work in the future?
1. Mount the current root and the next root under /mnt/root and /mnt/nextroot.
2. To create a copy of the current system, tar the contents of /mnt/root.
3. To update a system to the backed-up state, extract the tar contents into /mnt/nextroot and switch the UEFI variables to the other slot (roughly as sketched below).
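A rough sketch of those steps (the partition devices, the backup location, and the slot number are our assumptions, not from any official tool):
#!/bin/bash
set -e
CURRENT_PART=/dev/mmcblk0p1            # assumption: APP, the slot A rootfs
NEXT_PART=/dev/mmcblk0p2               # assumption: APP_b, the slot B rootfs
BACKUP=/media/usb/rootfs-backup.tar    # placeholder: keep this off both rootfs slots

mkdir -p /mnt/root /mnt/nextroot
mount "$CURRENT_PART" /mnt/root
mount "$NEXT_PART" /mnt/nextroot

# 1) Back up the running system.
tar --xattrs --numeric-owner -cpf "$BACKUP" -C /mnt/root .

# 2) Restore the backup into the other slot (this wipes the inactive slot).
rm -rf /mnt/nextroot/*
tar --xattrs --numeric-owner -xpf "$BACKUP" -C /mnt/nextroot

# 3) Point the bootloader at the other rootfs slot (here slot 1, i.e. B).
nvbootctrl -t rootfs set-active-boot-slot 1
sync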
In order to proceed with this workflow we'd need the ability to generate a rootfs image from the L4T package.
I know that flash.sh creates an image from the rootfs folder in order to flash the system. Where can we find the final file that is used to flash the system? We'd tar that one and use it as an update image.
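Something like this is what we have in mind, assuming flash.sh leaves a loop-mountable raw image under bootloader/ (file and directory names here are our guess):
$ cd Linux_for_Tegra/bootloader
$ mkdir -p /tmp/sysimg
$ sudo mount -o loop,ro system.img.raw /tmp/sysimg
$ sudo tar --xattrs --numeric-owner -cpf ../rootfs-update.tar -C /tmp/sysimg .
$ sudo umount /tmp/sysimg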
Thanks @JerryChang, I have been able to mount the system.img. Everything is there. That should work.
For rootfs A/B the scripts also create a system_b.img file, which is slightly bigger.
What is the difference between the files? Will I have any issues if I create a tar only from system.img and extract it to slot B?
My fear is that some content necessarily has to be placed on partition A and other content on partition B.
If that were the case, I could not simply use the contents of “system.img” to create my update file, as I would only be able to flash it onto partition A while partition B is active. It would be a bit weird to ask the customer which partition his system is currently on just to find out which update package to send him.
We can see different file sizes for the A/B partition images, but the raw images are the same.
It's the mksparse utility that creates a compressed version of the root file system; it only produces a smaller image to speed up flashing.
i.e. system.img.raw → system.img.
Since I can also see that both A/B partition raw images have the same file size, you may ignore the difference.
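If you want to double-check that on your side (a suggestion, not part of the reply above; the _b file name is a guess based on the system_b.img mentioned earlier), you can loop-mount both raw images and diff their contents:
$ mkdir -p /tmp/img_a /tmp/img_b
$ sudo mount -o loop,ro Linux_for_Tegra/bootloader/system.img.raw   /tmp/img_a
$ sudo mount -o loop,ro Linux_for_Tegra/bootloader/system_b.img.raw /tmp/img_b
$ sudo diff -rq /tmp/img_a /tmp/img_b      # no output means the contents match
$ sudo umount /tmp/img_a /tmp/img_b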