L4T 35.1.0 nvbootctrl mark as bootable

Hello,

Since L4T does not support OTA updates with A/B redundancy enabled, I am going to implement it myself by exchanging the contents of the rootfs.

Using nvbootctrl I can set the active slot. I tried deleting the whole rootfs from the active slot, and after 3 failed boot attempts the system automatically booted the other slot, which is correct.

1. Can I reduce the wait time after a kernel panic? It seems very slow.

After that I thought I could mount the other partition and copy a working rootfs into it. The active slot is still the other one, and the dump now tells me:

Current rootfs slot: B
Active rootfs slot: A
num_slots: 2
slot: 0, retry_count: 0, status: unbootable
slot: 1, retry_count: 3, status: normal

When I reboot, slot 0 is not tried anymore because it has the status unbootable.
2. How do I reset the bootable status so that I can attempt to boot it again?

did you mean the image-based OTA with rootfs A/B enabled? That’s right, it’s currently not supported on l4t-r35.1.

it’s controlled by the configuration MAX_ROOTFS_AB_RETRY_COUNT.
please check your SMD config; it defaults to 3, and you may modify it for testing.
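
if you are not sure where that setting is defined in your BSP, a plain recursive search over the L4T directory is one way to locate it (just a generic grep, nothing release-specific is assumed here):

$ grep -rn "MAX_ROOTFS_AB_RETRY_COUNT" Linux_for_Tegra/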

you’ll need to use nvbootctrl to mark the slot as active.
please note that there’s a -t option to select the target, bootloader or rootfs; the default target is bootloader.
i.e. $ sudo nvbootctrl [-t <target>] <command>
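
for example, to work on the rootfs slots instead of the bootloader slots (the sub-command names below may vary slightly between releases, please confirm with nvbootctrl -h):

$ sudo nvbootctrl -t rootfs dump-slots-info
$ sudo nvbootctrl -t rootfs get-current-slot
$ sudo nvbootctrl -t rootfs set-active-boot-slot 0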

Hey @JerryChang

that does not quite answer my question.

did you mean the image-based OTA with rootfs A/B enabled? That’s right, it’s currently not supported on l4t-r35.1.

When can we expect it? We need it.

I have slot 0, which was marked as “unbootable” after the system tried to boot it MAX_ROOTFS_AB_RETRY_COUNT times.
I copied a new rootfs into it and want to set it “bootable” again.
Is this possible? nvbootctrl does not seem to offer an option for it.
Is the rootfs slot lost forever once it has used up all its retry counts?

may I know what dump-slots-info shows, thanks

As I initially posted:

I cannot reset the retry_count or clear the status: unbootable.

hello seeky15,

please check Root File System Redundancy; rootfs update and customization is not supported in the current release.

Okay, so you are telling me that the new JetPack release is totally useless for us?
We are preparing a product and expected the promises of the developer previews to be followed by a working JetPack…

WHEN will it be supported?

Hey @JerryChang
It seems you need to reset bits 0+1 and 14+15 in /sys/firmware/efi/efivars/RootFsInfo… to remove the “unbootable” status.

I saw that you use the bootctrl HAL, which is also used in Android. Surprisingly, that tool does not offer the ability to reset the “unbootable” flag either. Are you going to implement it in your tool, or should I just reset the value in the efivars myself?

hello seeky15,

please check the steps below to update the RootfsInfo variable.
this should set the slot A status back to normal after a system reboot.
for example,

# cd /sys/firmware/efi/efivars/
# printf "\x07\x00\x00\x00" > /tmp/var_tmp.bin
# printf "\x3c\xc0\x01\x00" >> /tmp/var_tmp.bin
# chattr -i RootfsInfo-781e084c-a330-417c-b678-38e696380cb9
# dd if=/tmp/var_tmp.bin of=RootfsInfo-781e084c-a330-417c-b678-38e696380cb9; sync
# chattr +i RootfsInfo-781e084c-a330-417c-b678-38e696380cb9
# reboot
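
for reference, files under efivarfs start with a 4-byte attribute header (0x00000007 = non-volatile + boot-service + runtime access) followed by the variable data, so the second printf writes the 4-byte RootfsInfo value 0x0001c03c in little-endian order. you may inspect the variable before and after the write, for example:

# xxd RootfsInfo-781e084c-a330-417c-b678-38e696380cb9

after the reboot, nvbootctrl -t rootfs dump-slots-info should report slot 0 as normal again.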

FYI,
we plan to create separate UEFI variables for the rootfs A/B status, so users may reset them in the UEFI menu.
it’s scheduled for the next JetPack public release.
thanks

Hello @JerryChang

Sorry for my late reply due to the holidays.
That is much appreciated. It matches the script I had been able to figure out myself, so it seems to be correct.

When can we expect the next public release?

hello seeky15,

I cannot commit to a solid release date; please expect it to be the end of 2022 or early 2023.

Thank you @JerryChang ,

then we’ll have to implement the A/B system ourselves; we can’t wait that long to proceed with production.

In case we take the approach below, can we expect it to still work in the future?

  • Mount the current root and the next root under /mnt/root and /mnt/nextroot
  • To create a copy of the current system, tar the contents of /mnt/root
  • To update a system to the backed-up state, extract the tar contents into /mnt/nextroot and switch the UEFI variables to the other slot (see the sketch after this list)
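
Roughly as a sketch, this is what we have in mind (the partition label APP_b for the inactive slot, the backup path, and the use of /dev/disk/by-partlabel are assumptions that depend on the board and flash layout; the nvbootctrl call follows the -t rootfs usage from above):

# bind-mount the running rootfs so that submounts (/proc, /sys, /dev, ...) are not included
sudo mkdir -p /mnt/root /mnt/nextroot
sudo mount --bind / /mnt/root
# mount the inactive slot; the partition label APP_b is an assumption, check your flash layout
sudo mount /dev/disk/by-partlabel/APP_b /mnt/nextroot
# back up the running system (store the tar somewhere with enough free space, e.g. external media)
sudo tar --xattrs --numeric-owner -cpf /media/usb/rootfs-backup.tar -C /mnt/root .
# restore the backup into the other slot (ideally onto a freshly formatted partition)
sudo tar --xattrs --numeric-owner -xpf /media/usb/rootfs-backup.tar -C /mnt/nextroot
# switch the active rootfs slot (here: 1 = slot B) and reboot
sudo nvbootctrl -t rootfs set-active-boot-slot 1
sudo reboot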

In order to proceed with this workflow we’d need to be able to generate a rootfs image from the L4T package.
I know that flash.sh creates an image from the rootfs folder in order to flash the system. Where can we find the final file which is used to flash the system? We’d tar that one and use it as our update image.

Thanks for your answers!

it’s $OUT/Linux_for_Tegra/bootloader/system.img that is used as the rootfs for flashing to your target.

Thanks @JerryChang, I have been able to mount the system.img. Everything is there. That should work.

For rootfs A/B the scripts also create a system_b.img file, which is slightly bigger.
What is the difference between the files? Will I have any issues if I create a tar only from system.img and extract it to slot B?

hello seeky15,

I did not catch that; may I know which script you’re talking about here? thanks

@JerryChang my apologies.

When you use flash.sh or initrd_flash.sh, the files system.img and system_b.img are created. The files have different sizes.

So my fear is that some content necessarily goes onto partition A and other content onto partition B.

If that were the case, I could not simply use the content of “system.img” to create my update file, as I’d only be able to flash it onto partition A while partition B is active. It would be a bit weird to have to ask the customer which partition his system is currently on in order to know which update package to send him.

hello seeky15,

we can see different file sizes for the A/B partition images, but they are the same as the raw image.

it’s the mksparse utility that creates a compressed (sparse) version of the root file system; the smaller image size only serves to speed up image flashing.
i.e. system.img.raw → system.img.
since both A/B partition raw images have the same file size, you may ignore the difference.
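
if it helps, one way to build an update tarball directly from the raw image is to loop-mount it read-only and tar its contents (this sketch assumes system.img.raw is a plain ext4 image; the mount point and tarball name are just examples):

# in Linux_for_Tegra/bootloader/
sudo mkdir -p /mnt/sysimg
sudo mount -o loop,ro system.img.raw /mnt/sysimg
sudo tar --xattrs --numeric-owner -cpf rootfs-update.tar -C /mnt/sysimg .
sudo umount /mnt/sysimg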
