Orin Nano apt-get not working with A/B booting enabled

Hello, I am having some issues with A/B booting on my 8GB Orin Nano on the Devkit.
It is the p3767-0005 module on the P3768-0000 carrier board. I have also reproduced the same issue on the commercial p3767-0005 module. I am on JP6.0 rev 2.

The issue I am having is that I can't get apt upgrade or apt install to work when A/B booting is enabled. When I flash without A/B booting, apt upgrade and apt install work fine. I have attached the output from my apt commands to this post in aptUpgradeLog.txt.
aptUpgradeLog.txt (6.1 KB)

Is this the expected behavior?

If I want to install software (e.g. apt-get install nano), do I need to use Image-Based OTA instead?

hello matt.read,

May I also know what $ apt list --upgradable reports?
Please see also the developer guide, Updating from the NVIDIA APT Server.
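For reference, the Jetson APT source is configured in /etc/apt/sources.list.d/nvidia-l4t-apt-source.list on the target. On a stock r36.3 Orin install it typically contains something like the following (the suite and platform names here are my assumption based on the public repo layout, so verify against your own file):

```
deb https://repo.download.nvidia.com/jetson/common r36.3 main
deb https://repo.download.nvidia.com/jetson/t234 r36.3 main
```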

Hi @JerryChang here is my list:

user@NANO-XXXX:~$ sudo apt list --upgradable
Listing... Done
nvidia-l4t-core/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]
N: There are 2 additional versions. Please use the '-a' switch to see them.
user@NANO-XXXX:~$

hello matt.read,

Is only one upgradable package reported?
There should be several upgradable packages when checking with the r36.3 release.
for instance,

nvidia-l4t-3d-core/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                        
nvidia-l4t-camera/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                         
nvidia-l4t-core/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                           
nvidia-l4t-cuda/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                           
nvidia-l4t-dla-compiler/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                   
nvidia-l4t-firmware/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                       
nvidia-l4t-gbm/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                            
nvidia-l4t-graphics-demos/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                 
nvidia-l4t-gstreamer/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                      
nvidia-l4t-init/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                                           
nvidia-l4t-jetsonpower-gui-tools/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                          
nvidia-l4t-multimedia-utils/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]                                                               
nvidia-l4t-multimedia/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]     
...

Are you working with the native JetPack 6.0 public release?

Hi @JerryChang

Yes, it appears so:
nvidia-l4t-core/stable 36.3.0-20240719161631 arm64 [upgradable from: 36.3.0-20240506102626]
I'm not sure what the 2 additional versions are.
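For what it's worth, the hidden entries can be listed with apt's -a switch, which is what the "N: There are 2 additional versions" note refers to (my assumption is they are simply the same package published in other suites, so it's worth checking on your board):

```shell
# Show every candidate version of each upgradable package,
# not just the newest one per package name.
apt list --upgradable -a
```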

I am using this release: Jetson Linux 36.3 | NVIDIA Developer

thanks

Could you please move to the latest release version, since JetPack 6.1 is available?

Hi @JerryChang, you are saying to move to a newer JetPack; does this mean there is a bug with A/B booting in the current JP 6.0?

Should apt upgrade work with A/B booting enabled?

I don't want to upgrade if this is the expected behavior.

hello matt.read,

May I double-check your detailed steps to reproduce the issue?
Besides, bootloader redundancy is enabled by default; by A/B booting do you mean RootFS A/B? We have tested RootFS A/B and it worked normally on the r36.3 release version.

Hi @JerryChang, yes, you're correct, I meant RootFS A/B. My steps on my Ubuntu 22.04.5 LTS host machine are as follows:

Make BSP path:

export L4T=~/nvidia/nvidia_sdk/ORIN_NANO
mkdir -p $L4T
cd $L4T

Enable execution of foreign-architecture (aarch64) binaries via QEMU and binfmt_misc:

sudo apt-get install qemu-system
sudo apt-get install qemu-user-static
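As a sanity check of my own (not part of NVIDIA's guide), you can confirm the aarch64 binfmt handler is registered before building the rootfs, since the samplefs build chroots into an aarch64 filesystem and depends on it:

```shell
# Check whether the qemu-aarch64 binfmt_misc handler is registered;
# print its interpreter path if so, or a warning if not.
if [ -e /proc/sys/fs/binfmt_misc/qemu-aarch64 ]; then
    grep interpreter /proc/sys/fs/binfmt_misc/qemu-aarch64
else
    echo "qemu-aarch64 binfmt entry not registered"
fi
```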

Get the BSP:

cd $L4T
sudo wget https://developer.nvidia.com/downloads/embedded/l4t/r36_release_v3.0/release/jetson_linux_r36.3.0_aarch64.tbz2

Extract BSP:

tar xvf $L4T/jetson_linux_r36.3.0_aarch64.tbz2

Fix the NVIDIA zstd bug. NVIDIA switched to zstd for compressing the system image when flashing, but the sample rootfs package list was not updated accordingly, so the image fails to be unarchived. To fix:

sed -i '/libzstd1/a zstd' $L4T/tools/samplefs/nvubuntu-jammy-basic-aarch64-packages
cat $L4T/tools/samplefs/nvubuntu-jammy-basic-aarch64-packages | grep "zstd"

You should now see zstd in the package list.

Generate the NVIDIA basic root file system of Ubuntu 22.04.4 LTS (Jammy Jellyfish):

sudo $L4T/tools/samplefs/nv_build_samplefs.sh --abi aarch64 --distro ubuntu --flavor basic --version jammy

Assemble the ROOTFS that was just generated:

sudo rm -r $L4T/rootfs/*
sudo tar xvpf $L4T/tools/samplefs/sample_fs.tbz2 -C $L4T/rootfs/
sudo $L4T/tools/l4t_flash_prerequisites.sh
sudo $L4T/apply_binaries.sh

Create a user to skip the OEM setup:

sudo $L4T/tools/l4t_create_default_user.sh -u user -p USER_PASSWORD -n ORIN-NANO-XXXX --accept-license

To flash:

cd $L4T
sudo ROOTFS_AB=1 ROOTFS_RETRY_COUNT_MAX=3 ./tools/kernel_flash/l4t_initrd_flash.sh --external-device nvme0n1p1 --erase-all -c tools/kernel_flash/flash_l4t_t234_nvme_rootfs_ab.xml -p "-c bootloader/generic/cfg/flash_t234_qspi.xml" --showlogs --network usb0 jetson-orin-nano-devkit internal

I should note that I have also tested with the normal root file system (not the basic one), and I get the same error.
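After booting the flashed board, it can also help to confirm which rootfs slot is actually active before running apt. A sketch, assuming the L4T nvbootctrl tool and its -t rootfs switch as documented for r36.x (run this on the Jetson target, not the host):

```shell
# Report the active RootFS A/B slot and the per-slot status/retry counts.
# Guarded so it degrades gracefully when nvbootctrl is not present.
if command -v nvbootctrl >/dev/null; then
    sudo nvbootctrl -t rootfs get-current-slot
    sudo nvbootctrl -t rootfs dump-slots-info
else
    echo "nvbootctrl not found; run this on the Jetson target"
fi
```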