Jetpack-4.3/sdkmanager/ubuntu16.04 ERROR : File System and OS : chroot: failed to run command 'mount': Exec format error

Hello,

I am trying to flash a custom TX2 board with Jetpack-4.3 using sdkmanager on an Ubuntu 16.04 host. It fails, and the first error message is:

ERROR : File System and OS : chroot: failed to run command 'mount': Exec format error

sdkmanager kindly asks me to solve that, but what should I do to fix it?

If “mount” has an exec format error, then it implies the mount command itself is for a different architecture (which is very odd and unexpected). Somehow your host PC is using the wrong “mount” executable (e.g., a PC architecture trying to run arm64 mount directly, or arm64 trying to run a PC architecture mount command directly). Someone else will need to answer, but in preparation for that, you’ll likely need to give an exact description of the host PC and how you installed and then ran the SDK Manager app. A full log would probably also be useful.

Also, give the exact name of the file that was installed, and which computer it was installed on.

I am having the same issue. It seems to me something is wrong with the “nv_tegra/nv-apply-debs.sh” script. Look:

This script copies a bunch of OS files to the ~/nvidia/nvidia_sdk/JetPack_4.3_Linux_P3310/Linux_for_Tegra/rootfs directory. These files are binaries for the “ARM aarch64” architecture. Just run something like file ~/nvidia/nvidia_sdk/JetPack_4.3_Linux_P3310/Linux_for_Tegra/rootfs/bin/ls and you’ll see it’s aarch64.
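As a quick cross-check (a sketch; the rootfs path is the default JetPack 4.3 install location and may differ on your machine), comparing a host binary with a rootfs binary makes the mismatch obvious:

```shell
# Architecture of a native host binary (on a PC this reports x86-64):
file /bin/ls
# Architecture of a binary inside the JetPack rootfs (reports ARM aarch64):
file ~/nvidia/nvidia_sdk/JetPack_4.3_Linux_P3310/Linux_for_Tegra/rootfs/bin/ls
```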

Then at line 195 it tries to chroot . mount -t proc none /proc and fails with the message described by @phdm. This is understandable: after the chroot we are trying to run mount, which is a binary for a different architecture than my HOST’s (which is, obviously, x86_64).

In the following lines I see more chroot commands, which means they will fail too.
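For what it’s worth, “Exec format error” is the kernel’s ENOEXEC: execve() was handed a file whose header it does not recognize and that no binfmt handler claimed. A minimal reproduction on any Linux host (the file name is made up for illustration):

```shell
# Create an executable file that looks binary but is not a valid ELF:
printf '\177ELF\0bogus' > /tmp/not-a-real-elf
chmod +x /tmp/not-a-real-elf
# bash reports "cannot execute binary file" ("Exec format error" in newer bash)
bash -c '/tmp/not-a-real-elf'
echo "exit status $?"   # 126, the same status chroot reports in the log
```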

Presumably this script was tested in an environment just like mine and @phdm’s, yet it is failing for both of us. What are we missing?

This log excerpt seems to confirm what Pavel wrote:

2020-02-29 11:11:45.972 - info: /home/cam5/nvidia/nvidia_sdk/JetPack_4.3_Linux_P3310/Linux_for_Tegra/nv_tegra/nv-apply-debs.sh
2020-02-29 11:11:46.006 - info: Root file system directory is /home/cam5/nvidia/nvidia_sdk/JetPack_4.3_Linux_P3310/Linux_for_Tegra/rootfs
2020-02-29 11:11:46.007 - info: Copying public debian packages to rootfs
2020-02-29 11:11:51.371 - info: Start L4T BSP package installation
2020-02-29 11:11:51.371 - info: QEMU binary is not available, looking for QEMU from host system
2020-02-29 11:11:51.804 - info: Found /usr/bin/qemu-aarch64-static
2020-02-29 11:11:51.805 - info: Installing QEMU binary in rootfs
2020-02-29 11:11:51.879 - info: Installing Jetson OTA server key in rootfs
2020-02-29 11:11:51.897 - info: ~/nvidia/nvidia_sdk/JetPack_4.3_Linux_P3310/Linux_for_Tegra/rootfs ~/nvidia/nvidia_sdk/JetPack_4.3_Linux_P3310/Linux_for_Tegra
2020-02-29 11:11:51.897 - info: Registering Jetson OTA server key
2020-02-29 11:11:53.095 - error: chroot: failed to run command 'mount': Exec format error
2020-02-29 11:11:53.097 - info: exit status 126

I have not read nv-apply-debs.sh, but could it be run natively on the target instead of in an emulator on the host?

@phdm:

I gave up on the SDK Manager and followed these instructions: https://docs.nvidia.com/jetson/l4t/Tegra%20Linux%20Driver%20Package%20Development%20Guide/quick_start.html#wwpID0E0ND0HA

All shell, all command line – it worked for me.

I haven’t compared the scripts yet so I don’t know what didn’t work in the script from the SDK Manager. I’ll report if I find anything.

Good luck!

No file in the “rootfs/” should ever be run on a host PC. These are entirely for creating an arm64 image. The “Linux_for_Tegra/flash.sh” is the script which does the actual flash (calling some x86_64/amd64 executables). Any reference to using “rootfs/” for any purpose other than creating an image file is invalid since these will end up as content on the Jetson, and are never part of the host PC.

In 4.3, NVIDIA uses qemu inside apply_binaries.sh to run ‘apt’ commands on the rootfs stored on the host machine, as if they were run on the target. Those aarch64 binaries should be handed to qemu by the kernel’s binfmt mechanism, but although qemu-user-static was installed, the binfmt configuration never made the link. Listing the supported binfmts, I got only:

phdm@jetpack-servant:~/Downloads$ update-binfmts --display
python3.5 (enabled):
     package = python3.5
        type = magic
      offset = 0
       magic = \x16\x0d\x0d\x0a
        mask = 
 interpreter = /usr/bin/python3.5
    detector = 
python2.7 (enabled):
     package = python2.7
        type = magic
      offset = 0
       magic = \x03\xf3\x0d\x0a
        mask = 
 interpreter = /usr/bin/python2.7
    detector = 
jar (enabled):
     package = openjdk-8
        type = magic
      offset = 0
       magic = PK\x03\x04
        mask = 
 interpreter = /usr/bin/jexec
    detector =

phdm@jetpack-servant:~/Downloads$
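The kernel’s own view can be cross-checked directly: registered handlers appear as entries under /proc/sys/fs/binfmt_misc (a sketch; it assumes binfmt_misc is mounted at its standard location, as on a stock Ubuntu desktop). On a broken setup like the one above, the qemu-aarch64 entry is absent:

```shell
# List the formats the kernel itself can currently dispatch:
ls /proc/sys/fs/binfmt_misc/ 2>/dev/null || echo "binfmt_misc not mounted"
# Show the aarch64 handler, if registered (absent on the broken setup):
cat /proc/sys/fs/binfmt_misc/qemu-aarch64 2>/dev/null \
  || echo "qemu-aarch64 not registered"
```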

I have solved the problem by installing manually on my host

sudo dpkg -i binfmt-support_2.1.6-1_amd64.deb

and running afterwards

sudo dpkg-reconfigure qemu-user-static

Now listing the supported binfmts shows

... 
qemu-aarch64 (enabled):
     package = qemu-user-static
        type = magic
      offset = 0
       magic = \x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00
        mask = \xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff
 interpreter = /usr/bin/qemu-aarch64-static
    detector =
...
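For reference, what dpkg-reconfigure/update-binfmts ultimately hand to the kernel is a single registration line of the form :name:type:offset:magic:mask:interpreter:flags, written into /proc/sys/fs/binfmt_misc/register. A sketch built from the magic and mask shown above (root would be required to actually write it; here it is only printed):

```shell
# The aarch64 ELF magic/mask from the update-binfmts listing, as one
# binfmt_misc registration line (type M = match by magic bytes):
REG=':qemu-aarch64:M::\x7f\x45\x4c\x46\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:'
echo "$REG"
# With root, the manual equivalent of dpkg-reconfigure would be:
# echo "$REG" > /proc/sys/fs/binfmt_misc/register
```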

You are of course correct. The use of QEMU (versus directly unpacking files onto the sample rootfs) is new as of R32.3, since that release moved much of what was previously a simple file unpack into a package structure, and any package install requiring QEMU (with QEMU missing) would result in an error.

What surprises me is that the installer was unable to detect the missing QEMU (the “qemu-user-static” package provides “/usr/bin/qemu-aarch64-static”…this should have been the error instead of exec format error). In theory the SDKM package itself should list QEMU as a dependency and be a bit chatty about the missing QEMU during its install. What was the specific command used on your host PC to install the “sdkmanager_1.0.1-5538_amd64.deb” file?

Actually, qemu was not missing; what was missing was the kernel setup to automatically invoke qemu when encountering an aarch64 executable. That setup can be configured and enabled by the binfmt-support package, but qemu-user-static does not require binfmt-support. If binfmt-support happens to be installed before qemu-user-static, then qemu-user-static will use it to configure the kernel. If binfmt-support is not already installed when one installs qemu-user-static, that configuration does not happen, and no error is reported. The way to solve that, when binfmt-support is installed after qemu-user-static, is to call

dpkg-reconfigure qemu-user-static


Regarding sdkmanager: sdkmanager does not require qemu-user-static, and that’s normal, because sdkmanager itself does not use qemu. The usage, and thus the need for qemu, comes from apply_binaries.sh in Jetpack-4.3. And Jetpack-4.3 is not a package but a tarball, thus it has no requirements :(. IIRC, the combination of sdkmanager and apply_binaries.sh gave an error message saying that qemu-user-static was missing, so I installed it; but as qemu-user-static does not require binfmt-support, and as apply_binaries.sh also does not test for the presence of binfmt-support, it was up to me to solve it :)

A perfect storm of different conditions. Looks like this is a candidate for a fix before the next JetPack/SDK Manager release. Good catch!

The “apply_binaries.sh” script is part of the driver package, but SDK Manager is intended to be a wrapper for all of this, and installing SDKM guarantees a need to prepare for using the driver package. Thus, it looks like NVIDIA will want a future release of SDKM to depend on both “binfmt-support” and “qemu-user-static”. There were some other requirements sometimes missing, but those are unrelated to this particular thread (“libgconf-2-4”, “libcanberra-gtk-module”, and a particular “python” release).

Thank you linuxdev,

We are already installing “qemu-user-static” in JetPack. This can be seen in ~/.nvsdkm/dist/sdkml3_jetpack_l4t_43_ga.json. We retested and confirmed that in a clean environment there is no error when installing JetPack 4.3 with SDKM.

Is the package “binfmt-support” installed by default for JetPack/SDKM? I ask because there may be some configuration dependent on this (and install order dependent upon this being present before running QEMU…not sure). If you look here, you’ll see some “/etc” config files are present. Presumably this is to list which binfmts are available, and where, for support of QEMU:
https://packages.ubuntu.com/xenial/amd64/binfmt-support/filelist

Hi linuxdev,

No, SDKM doesn’t install “binfmt-support”.
In the retest, we used the “apt purge” command to remove both the “binfmt-support” and “qemu-user-static” packages first, then did a full reinstall, and everything went well.

I am wondering what happens if the owner of the host PC already had qemu-user-static, but the configuration did not originally include aarch64? I think the issue may have hit only the few people who originally had qemu-user-static set up for some other, non-aarch64 architecture. I am guessing that in your test a complete purge would always work, but on an end user’s host PC, SDKM would not purge first…in which case a previous non-aarch64 use may have unexpected consequences (I am not entirely sure how binfmt-support is intended to work, but it seems to only generate metadata for an architecture, and to require a manual run at times to pick up a new architecture when there was a previous configuration for a different one). I just don’t know enough about QEMU to really say.

I am not an expert on that matter either, but qemu is an emulator that can also be used on its own to create a virtual machine. If one wants the kernel to invoke it automatically when trying to execute a foreign binary, the kernel must be instructed to do so by configuring it at run time. That’s the job of the binfmt-support package. But binfmt-support is not used only for qemu executables; it is also a generalisation of the ‘#!/bin/sh’ convention, used to invoke the python or java interpreter automatically. What I saw is that a specific command from the binfmt-support package must be run, when qemu-user-static is already installed or while it is installing itself, to populate the file that configures the kernel. The details are above. And as @linuxdev wrote, normal users don’t purge their PCs before installing sdkmanager. Also, did you test on Ubuntu 16.04, with qemu-user-static installed without binfmt-support?