Error running apply_binaries.sh

Just for background, I'm running a Jetson AGX Xavier.
I have flashed it with a Balena-OS image (though I don’t think this issue is balena specific).
I am trying to install the linux_for_tegra drivers/tools in a docker container running on the device.
The docker container is based on NVIDIA's l4t-base image: https://ngc.nvidia.com/catalog/containers/nvidia:l4t-base.

I am not using Jetpack/SDKManager to install linux_for_tegra, rather I copied the linux_for_tegra folder over to the container and am running apply_binaries.sh on device.

When running apply_binaries.sh I run into:
Unpacking nvidia-l4t-firmware (32.4.3-20200625213407) ...
dpkg: error processing archive /tegra/Linux_for_Tegra/nv_tegra/l4t_deb_packages/nvidia-l4t-firmware_32.4.3-20200625213407_arm64.deb (--unpack):
 unable to clean up mess surrounding './lib/firmware/nv-WIFI-Version' before installing another version: Read-only file system
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)

I see the same error when trying to manually install this package with dpkg -i.
I do NOT see the same error with other packages in the same directory; for example, nvidia-l4t-core_32.4.3-20200625213407_arm64.deb installs fine.

It's only nvidia-l4t-firmware_32.4.3-20200625213407_arm64.deb so far that is hitting this error.

I see the nv-WIFI-Version file within the Linux_for_Tegra folder, but I have no idea what the read-only complaint means; googling this error only turns up very exotic issues.

root@b202bef:/tegra/Linux_for_Tegra/nv_tegra# find ../ -name '*nv-WIFI-Version*'
../nv_tegra/nvidia_drivers/lib/firmware/nv-WIFI-Version
root@b202bef:/tegra/Linux_for_Tegra/nv_tegra# ls -la ../nv_tegra/nvidia_drivers/lib/firmware/
total 366
drwxr-xr-x 7 root root   1024 Aug 31 01:44 .
drwxr-xr-x 3 root root   1024 Jun 26 04:34 ..
-rw-r--r-- 1 root root  65959 Jun 26 04:33 bcm4354.hcd
drwxr-xr-x 2 root root   1024 Jun 26 04:34 brcm
drwxr-xr-x 2 root root   1024 Jun 26 04:34 gp10b
drwxr-xr-x 2 root root   1024 Jun 26 04:34 gv11b
-rw-r--r-- 1 root root     78 Jun 26 04:33 nv-BT-Version
-rw-r--r-- 1 root root     82 Jun 26 04:33 nv-WIFI-Version
-rw-r--r-- 1 root root   2000 Jun 26 04:33 rtl8822_setting.bin
-rw-r--r-- 1 root root      6 Jun 26 04:33 rtl8822cu_config
-rw-r--r-- 1 root root  45880 Jun 26 04:33 rtl8822cu_fw
drwxr-xr-x 2 root root   1024 Jun 26 04:34 tegra18x
-rw-r--r-- 1 root root 124416 Jun 26 04:33 tegra18x_xusb_firmware
drwxr-xr-x 2 root root   1024 Jun 26 04:34 tegra19x
-rw-r--r-- 1 root root 124928 Jun 26 04:33 tegra19x_xusb_firmware
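One way to tell whether the "Read-only file system" message refers to the mount itself (rather than file permissions) is to ask the kernel directly. A quick sketch, assuming `findmnt` is available inside the container; the `awk` fallback parses `/proc/mounts` when it isn't:

```shell
# Does the mount containing /lib/firmware carry the "ro" option?
findmnt -T /lib/firmware -o TARGET,OPTIONS

# Fallback if findmnt is missing: list every mount whose options include "ro"
# (/proc/mounts fields: device mountpoint fstype options dump pass)
awk '$4 ~ /(^|,)ro(,|$)/ {print $2, $4}' /proc/mounts
```

If `/lib/firmware` (or a parent mount covering it) shows up with `ro`, dpkg has no way to replace files there, which matches the error above.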

Is this a known issue? Or am I missing something?

(PS: the container I'm running is also based on a Balena base image of Ubuntu Bionic (18.04).)

Do you run apply_binaries.sh with sudo?
Could you check whether you can delete …/Linux_for_Tegra/rootfs/lib/firmware/nv-WIFI-Version?

Sudo made no difference, and the Linux_for_Tegra/rootfs/lib folder doesn't exist.

Could this be related to running these commands in a container? I will try performing the apply_binaries in a docker RUN command.

I think using the l4t-base image is not useful in my case, since my flashed image is not the standard Linux_for_Tegra rootfs but a custom Balena image, which I suspect contains only the minimal Linux_for_Tegra components needed to boot the board. The l4t-base image only exposes into the container the L4T tools that are installed on the host, and since my Balena image likely lacks most of those, this probably accomplished nothing.

In your case, you could try removing the file Linux_for_Tegra/nv_tegra/l4t_deb_packages/nvidia-l4t-firmware_32.4.3-20200625213407_arm64.deb.

I can try, but I believe the other packages depend on it.

Yeah, other .debs depend on it. If I remove nv-apply-debs.sh completely, then apply_binaries.sh completes; but of course then I don't have what I need.

Something you may find useful, but not in any particular order…

The content of the “Linux_for_Tegra/” directory is the “driver package”. SDKM is merely a front end which downloads this, but you can also find the driver package available for separate download. This is the actual flash software.

Within “Linux_for_Tegra/” is subdirectory “Linux_for_Tegra/rootfs/”. This content is not from the driver package. The default is that the “sample root filesystem” content is unpacked there using sudo. SDKM would also download and unpack this, but I think this occurs only if you actually try to flash once. If you have not flashed once, or have not manually added the sample rootfs, then the “rootfs/lib/” content would be missing.

One only runs “sudo ./apply_binaries.sh” after the sample rootfs is in place, so you might check if the sample rootfs was correctly placed.

For reference, you can look for your L4T version here (requires going there, logging in, and then clicking the link again), and look for “driver package” and “sample root filesystem” if you want to do this manually without using SDKM:
https://developer.nvidia.com/linux-tegra

You can delete the content of the “rootfs/” at any time and unpack the sample rootfs again (using sudo) if you wish, followed by “sudo ./apply_binaries.sh”. Then “rootfs/lib/” should be present.
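The manual sequence looks roughly like this (a sketch only; the tarball name below is an example for R32.4.3 and must match the release you actually downloaded, and these commands are destructive to the rootfs/ staging directory):

```shell
# Run from the directory containing Linux_for_Tegra/
cd Linux_for_Tegra/rootfs
sudo rm -rf ./*          # clear out any stale or partial rootfs content first
# Unpack the sample root filesystem with sudo so ownership and permissions survive
sudo tar xpjf ../../Tegra_Linux_Sample-Root-Filesystem_R32.4.3_aarch64.tbz2
cd ..
sudo ./apply_binaries.sh # rootfs/lib/ should now exist and receive the NVIDIA content
```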

Your issue could of course be something else, but you should check first if the sample rootfs was installed.

Thanks yeah this is useful information.

I think what I'm doing is pretty weird: I'm copying the Linux_for_Tegra folder onto the Jetson device, into a docker container, and from there I'm running apply_binaries.sh -r / to apply the binaries to the docker container's "live" rootfs.
I had this working for the TX2, although that version had no nv-apply-debs.sh, and it's those debs that I'm running into trouble installing, hitting the read-only error mentioned above.

The issue was that I'm running this in a docker container on the Xavier, and the container has /lib/firmware mounted read-only.
When I issue these commands from the Dockerfile during the docker build stage, they don't encounter this error.
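In other words, doing the work at build time sidesteps the read-only runtime mount. A minimal Dockerfile sketch of that approach (the image tag and paths here are assumptions; adjust them to your L4T release and build context):

```dockerfile
# Assumed base image tag; pick the l4t-base tag matching your L4T release
FROM nvcr.io/nvidia/l4t-base:r32.4.3

# Copy the driver package into the image and apply it to the container's own rootfs.
# At build time /lib/firmware is writable, so the dpkg step does not hit the
# "Read-only file system" error seen at runtime.
COPY Linux_for_Tegra /tegra/Linux_for_Tegra
RUN cd /tegra/Linux_for_Tegra && ./apply_binaries.sh -r /
```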

I do hit another error, but I'm going to create another post and reference this one for clarity.

In slightly older L4T releases, apply_binaries.sh merely unpacked the correct tar files to the correct locations. In newer releases the tarballs are instead ".deb" packages, and the dpkg tool is used to install them. Because this is intended to run on a PC, it implies the PC has to use QEMU to pretend it is aarch64/arm64. If this is still in the apply_binaries.sh scripting, then your Jetson docker container is using QEMU to pretend it is the native arm64/aarch64.

I suppose if your docker container is pretending to be x86_64/amd64, then it works perfectly well to have the Jetson pretend to be amd64/x86_64 and run QEMU to pretend it is arm64/aarch64 (I think there must be a joke in there somewhere, like the old Abbott and Costello comedy routine).

Perhaps the difference between when it used to work versus now is QEMU involvement.
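A quick way to check whether QEMU is actually in the loop is to compare the native architecture against what binfmt_misc has registered; a sketch, assuming a Linux host or container:

```shell
# What the kernel says this machine is (arm64 reports "aarch64")
uname -m

# Any qemu-* binfmt handlers registered? Empty output means no binfmt-based emulation.
ls /proc/sys/fs/binfmt_misc/ 2>/dev/null | grep -i qemu || echo "no qemu binfmt handlers found"
```

If a `qemu-aarch64` handler shows up while `uname -m` already reports `aarch64`, the dpkg steps may be running through an unnecessary emulation layer.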