How to build a JetPack 4.3 Docker image?

Hi all,

I was wondering if it is possible to create a base Docker image that includes JetPack 4.3 resources. There is a working solution for JetPack 4.1.1 here:

However, those instructions don't apply to the new version of JetPack: it throws `mount: /proc: permission denied` when building the image. I know there are a lot of differences between JP 4.1.1 and JP 4.3 resources, such as driver packages, prerequisites, etc., but here is my Dockerfile:

# jetson-agx/opengl:jetpack-$BUILD_VERSION-bionic

LABEL maintainer "doruk898"

# args (BUILD_VERSION, USER, and UID are referenced below)
ARG BUILD_VERSION
ARG USER
ARG UID

# setup environment variables
ENV container docker

# set the locale
ENV LANG=C.UTF-8

# install packages
RUN apt-get update \
    && apt-get install -q -y \
    dirmngr \
    gnupg2 \
    lsb-release \
    && rm -rf /var/lib/apt/lists/*

# setup sources.list
RUN echo "deb-src $(lsb_release -cs) main restricted \n\
deb-src $(lsb_release -cs)-updates main restricted \n\
deb-src $(lsb_release -cs)-backports main restricted universe multiverse \n\
deb-src $(lsb_release -cs)-security main restricted" \
    > /etc/apt/sources.list.d/official-source-repositories.list

# install build tools
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive TERM=linux apt-get install --no-install-recommends -q -y \
    apt-transport-https \
    apt-utils \
    bash-completion \
    build-essential \
    ca-certificates \
    clang \
    clang-format \
    cmake \
    cmake-curses-gui \
    curl \
    gconf2 \
    gconf-service \
    gdb \
    git-core \
    git-gui \
    gvfs-bin \
    inetutils-ping \
    llvm \
    llvm-dev \
    nano \
    net-tools \
    pkg-config \
    shared-mime-info \
    software-properties-common \
    sudo \
    tzdata \
    unzip \
    wget \
    qemu-user-static \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*

# download and install nvidia jetson xavier driver package
RUN if [ "$BUILD_VERSION" = "4.3"   ]; then \
      echo "downloading jetpack-$BUILD_VERSION" ; \
      wget -qO- | \
      tar -xvjf - -C /tmp/ ; \
      wget -qO- | \
      tar -xpjf - -C /tmp/Linux_for_Tegra/rootfs/ ; \
      cd /tmp/Linux_for_Tegra ; \
    elif [ "$BUILD_VERSION" = "4.1.1" ]; then \
      echo "downloading jetpack-$BUILD_VERSION" ; \
      wget -qO- | \
      tar -xvjf - -C /tmp/ ; \
      cd /tmp/Linux_for_Tegra ; \
      # fix error in tar command when extracting configuration files, by overwriting existing configuration files \
      #sed -i -e 's@tar xpfm ${LDK_NV_TEGRA_DIR}/config.tbz2@tar --overwrite -xpmf ${LDK_NV_TEGRA_DIR}/config.tbz2@g' ; \
    else \
      echo "error: please specify jetpack version in" ; \
      exit 1 ; \
    fi \
    && sed -i '168,187 s/^/#/' /tmp/Linux_for_Tegra/nv_tegra/ \
    && sed -i '195 s/^/#/' /tmp/Linux_for_Tegra/nv_tegra/ \
    && sed -i '197 s/^/#/' /tmp/Linux_for_Tegra/nv_tegra/ \
    && rm /tmp/Linux_for_Tegra/nv_tegra/l4t_deb_packages/nvidia-l4t-x11_32.3.1-20191209230245_arm64.deb /tmp/Linux_for_Tegra/nv_tegra/l4t_deb_packages/nvidia-l4t-graphics-demos_32.3.1-20191209230245_arm64.deb \
    #&& sed -i '195i LC_ALL=C proot . mount -t proc none /proc' /tmp/Linux_for_Tegra/nv_tegra/ \
    && ./ -r /tmp/Linux_for_Tegra/rootfs/ \
    && mkdir -p /tmp/Linux_for_Tegra/rootfs/opt/nvidia/dep_repos \
    # fix erroneous entry in /etc/ \
    #&& echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ \
    # add missing /usr/lib/aarch64-linux-gnu/tegra/ \
    #&& echo "/usr/lib/aarch64-linux-gnu/tegra" > /usr/lib/aarch64-linux-gnu/tegra/ \
    #&& update-alternatives --install /etc/ aarch64-linux-gnu_gl_conf /usr/lib/aarch64-linux-gnu/tegra/ 1000 \
    # fix erroneous entry in /usr/lib/aarch64-linux-gnu/tegra-egl/ \
    #&& echo "/usr/lib/aarch64-linux-gnu/tegra-egl" > /usr/lib/aarch64-linux-gnu/tegra-egl/ \
    #&& update-alternatives --install /etc/ aarch64-linux-gnu_egl_conf /usr/lib/aarch64-linux-gnu/tegra-egl/ 1000 \
    && rm -Rf /tmp/Linux_for_Tegra

# install packages
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive TERM=linux apt-get install --no-install-recommends -q -y \
    mesa-utils \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*

# create user
RUN adduser $USER --uid $UID --disabled-password --gecos "" \
    && usermod -aG audio,video $USER \
    && echo "$USER ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

# switch to non-root user
USER $USER

# labels
LABEL org.label-schema.schema-version="1.0"
LABEL org.label-schema.description="NVIDIA Jetson AGX JetPack-$BUILD_VERSION OpenGL - Ubuntu-18.04."
LABEL org.label-schema.version=$BUILD_VERSION
LABEL org.label-schema.docker.cmd="xhost +local:root \
docker run -it \
  --device /dev/nvhost-as-gpu \
  --device /dev/nvhost-ctrl \
  --device /dev/nvhost-ctrl-gpu \
  --device /dev/nvhost-ctxsw-gpu \
  --device /dev/nvhost-dbg-gpu \
  --device /dev/nvhost-gpu \
  --device /dev/nvhost-prof-gpu \
  --device /dev/nvhost-sched-gpu \
  --device /dev/nvhost-tsg-gpu \
  --device /dev/nvmap \
  --device /dev/snd \
  --net=host \
  -e DISPLAY \
  -e PULSE_SERVER=tcp:$HOST_IP:4713 \
  -e PULSE_COOKIE_DATA=`pax11publish -d | grep --color=never -Po '(?<=^Cookie: ).*'` \
  -e QT_X11_NO_MITSHM=1 \
  -v /dev/shm:/dev/shm \
  -v /etc/localtime:/etc/localtime:ro \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /usr/local/cuda/lib64:/usr/local/cuda/lib64 \
  -v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:ro \
  -v ${XDG_RUNTIME_DIR}/pulse/native:/run/user/1000/pulse/native \
  -v ~/mount/backup:/backup \
  -v ~/mount/data:/data \
  -v ~/mount/project:/project \
  -v ~/mount/tool:/tool \
  --rm \
  --name jetson-agx-opengl-jetpack-$BUILD_VERSION-bionic \
  jetson-agx/opengl:jetpack-$BUILD_VERSION-bionic \
xhost -local:root"

# set the working directory

# update .bashrc
RUN echo \
'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra-egl:/usr/lib/aarch64-linux-gnu:/usr/local/lib:$LD_LIBRARY_PATH\n\
export NO_AT_BRIDGE=1\n\
export PATH=/usr/local/cuda/bin:$PATH\n\
export PS1="${debian_chroot:+($debian_chroot)}\u:\W\$ "' \
    >> $HOME/.bashrc

CMD ["bash"]

The error I get when building this Dockerfile is `gpg: can't connect to the agent: IPC connect call failed`, which comes from Linux_for_Tegra/nv_tegra/

The question is, is there any way to bypass some steps that SDK manager handles and build an isolated base Jetpack 4.3 docker image?

The reason for that particular failure is that you can't use `mount` inside Docker (and you can't use `-v` at the build stage to get around that). proot won't get you around it either. A mountable procfs is required by certain scripts you seem to be using, which use a chroot to edit the rootfs. These three lines, in the script that installs .deb files onto the rootfs, are your problem:

LC_ALL=C chroot . mount -t proc none /proc
LC_ALL=C APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=1 chroot . apt-key add "/etc/apt/jetson-ota-public.key"
umount ${L4T_ROOTFS_DIR}/proc
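If you go the route of editing those lines out, sed can comment them in place. A self-contained sketch on a throwaway stand-in file (the real script's name and path are omitted above, so point sed at the actual nv_tegra script yourself):

```shell
# Comment out the three offending lines so the mount/apt-key/umount never run.
# $script is a throwaway stand-in; substitute the real script path.
script=$(mktemp)
cat > "$script" <<'EOF'
LC_ALL=C chroot . mount -t proc none /proc
LC_ALL=C APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=1 chroot . apt-key add "/etc/apt/jetson-ota-public.key"
umount ${L4T_ROOTFS_DIR}/proc
EOF
# prefix each matching line with '#'
sed -i -e '/mount -t proc/s/^/#/' \
       -e '/apt-key add/s/^/#/' \
       -e '/^umount/s/^/#/' "$script"
grep -c '^#' "$script"   # prints 3
```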

Is it possible to build an image with SDKM installed? Yes. Is it possible, from there, to make images that you can flash? Maybe, but I can't think of a way without editing NVIDIA's scripts beyond what you've already done.

Note: Instead of using this command a keyring should be placed directly in the
           /etc/apt/trusted.gpg.d/ directory with a descriptive name and either "gpg" or "asc" as
           file extension.

That might give you a clue towards getting around it (from the apt-key manual).

If you edit out those three lines and instead copy or move jetson-ota-public.key, with the recommended .asc extension, directly into rootfs/etc/apt/trusted.gpg.d/ as suggested, it may get you further. Make sure to change the permissions so it's owned by root and not world-writable (which nearly everything installed by SDKM under Linux_for_Tegra is by default). Permissions should be:

-rw-r--r-- 1 (644) root root
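For reference, `install` can copy the key and set that mode in one step. A self-contained sketch with throwaway paths (on the real tree you'd target rootfs/etc/apt/trusted.gpg.d/ and run as root, or add `-o root -g root`, so ownership ends up root:root):

```shell
rootfs=$(mktemp -d)              # stand-in for Linux_for_Tegra/rootfs
mkdir -p "$rootfs/etc/apt/trusted.gpg.d"
key=$(mktemp)                    # stand-in for nv_tegra/jetson-ota-public.key
echo "-----BEGIN PGP PUBLIC KEY BLOCK-----" > "$key"
# copy and chmod in one step; add -o root -g root when running as root
install -m 0644 "$key" "$rootfs/etc/apt/trusted.gpg.d/jetson-ota-public.asc"
stat -c '%a' "$rootfs/etc/apt/trusted.gpg.d/jetson-ota-public.asc"   # prints 644
```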

To NVIDIA: is there a keyserver out there from which the OTA public key can be grabbed directly?


Are you looking for the base container for JetPack 4.3?
If yes, you can find it here:



Sorry for the late reply and thanks for your explanation. I will try and post back with the results.

So, actually, I take back my statement that proot won't get you around it. I was able to figure out a way to install the apt key, update, and install software entirely in userspace, so this should work even in rootless Docker (or Podman, or even inside something like Termux on a non-rooted Android phone).

(all of these steps from within Linux_for_Tegra unless otherwise specified)

Step 1: Move the rootfs folder aside and extract your rootfs per NVIDIA's instructions, but without sudo:

mv rootfs rootfs.bak
mkdir rootfs
cd rootfs
tar -jxpf ../Tegra_Linux_Sample-Root-Filesystem_R32.2.3_aarch64.tbz2
cd ..

Step 2: Copy the ota apt key into the rootfs (really this should come from a keyserver)

cp nv_tegra/jetson-ota-public.key rootfs/etc/apt/trusted.gpg.d/jetson-ota-public.asc

Step 3: create an apt-sources file

touch rootfs/etc/apt/sources.list.d/nvidia-l4t-apt-source.list

Step 4: using a text editor, enter this into the apt sources file:

deb r32 main
deb t194 r32 main
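Steps 3 and 4 can be scripted. The repository base URL is omitted above, so `$REPO` below is a placeholder you'd replace with NVIDIA's Jetson apt repo; the SOC tag is the part that varies per board:

```shell
REPO="https://example.invalid/jetson"   # placeholder, not the real repo URL
SOC=t194                                # Xavier; Nano is t210
# stand-in for rootfs/etc/apt/sources.list.d/nvidia-l4t-apt-source.list
list=$(mktemp)
printf 'deb %s r32 main\ndeb %s/%s r32 main\n' "$REPO" "$REPO" "$SOC" > "$list"
cat "$list"
```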

Step 5: Now, for the magic part, enter the proot and do an apt update

$ proot -S ./rootfs/ -q qemu-aarch64-static -w /
# apt update
Hit:1 r32 InRelease
Hit:2 r32 InRelease
Hit:3 bionic InRelease
Get:4 bionic-updates InRelease [88.7 kB]
Get:5 bionic-backports InRelease [74.6 kB]
Get:6 bionic-security InRelease [88.7 kB]
Get:7 bionic-updates/main arm64 Packages [613 kB]                                                               
Get:8 bionic-updates/main arm64 DEP-11 Metadata [290 kB]                                                        
Get:9 bionic-updates/main DEP-11 48x48 Icons [73.8 kB]                                                          
Get:10 bionic-updates/main DEP-11 64x64 Icons [143 kB]                                                          
Get:11 bionic-updates/restricted arm64 Packages [1,088 B]                                                       
Get:12 bionic-updates/universe arm64 Packages [935 kB]                                                          
Get:13 bionic-updates/universe Translation-en [323 kB]                                                          
Get:14 bionic-updates/universe arm64 DEP-11 Metadata [259 kB]                                                   
Get:15 bionic-updates/universe DEP-11 48x48 Icons [198 kB]                                                      
Get:16 bionic-updates/universe DEP-11 64x64 Icons [444 kB]                                                      
Get:17 bionic-backports/universe arm64 DEP-11 Metadata [7,960 B]                                                
Get:18 bionic-security/main arm64 DEP-11 Metadata [32.5 kB]                                                     
Get:19 bionic-security/main DEP-11 48x48 Icons [17.6 kB]                                                        
Get:20 bionic-security/main DEP-11 64x64 Icons [41.5 kB]                                                        
Get:21 bionic-security/universe arm64 DEP-11 Metadata [37.0 kB]                                                 
Get:22 bionic-security/universe DEP-11 48x48 Icons [16.4 kB]                                                    
Get:23 bionic-security/universe DEP-11 64x64 Icons [116 kB]                                                     
Fetched 3,801 kB in 13s (282 kB/s)                                                                                                                   
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.

edit: fixed the SOC in the sources list (I forgot I was posting in the Xavier forum, not the Nano forum). Xavier is t194, Nano is t210. A full list is in rootfs/etc/systemd/
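Picking the SOC tag could be automated with a small helper. This function is hypothetical (names are mine, not from any NVIDIA script); the t194/t210 mapping comes from the note above, and the t186 entry for TX2 is my assumption, so verify it against the list in the rootfs:

```shell
# Map a Jetson board name to its SOC tag for the apt sources list.
soc_for_board() {
  case "$1" in
    xavier) echo t194 ;;   # Jetson AGX Xavier
    nano)   echo t210 ;;   # Jetson Nano
    tx2)    echo t186 ;;   # assumption; check the full list in the rootfs
    *)      echo "unknown board: $1" >&2; return 1 ;;
  esac
}
soc_for_board xavier   # prints t194
```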

edit: bind-mounting apt/dpkg themselves inside doesn't work, unfortunately, so it'll remain slower; but according to the proot manual, things like compilers and interpreters should work, and examples are provided.

If you install some NVIDIA packages (e.g. DeepStream) and get errors like "ERROR: Could not find Please install Nvidia Drivers from JetPack.", you may need to follow some of the steps from apply_binaries as your user (not as root), such as extracting some tarballs. For example (these lines are mostly snipped out of apply_binaries):

cd rootfs
tar -I lbzip2 -xpmf ../nv_tegra/nvidia_drivers.tbz2
tar -I lbzip2 -xpmf ../nv_tegra/nv_tools.tbz2
tar -I lbzip2 -xpmf ../nv_tegra/nv_sample_apps/nvgstapps.tbz2
tar -I lbzip2 -xpmf ../nv_tegra/weston.tbz2
tar -I lbzip2 -xpmf ../nv_tegra/config.tbz2
tar -I lbzip2 -xpmf ../nv_tegra/graphics_demos.tbz2
tar -I lbzip2 -xpmf ../kernel/kernel_supplements.tbz2
(kernel headers need different treatment, see )
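Those extractions can also be written as a loop, and since lbzip2 is just a parallel drop-in for bzip2, it's easy to fall back when it isn't installed. A self-contained sketch that round-trips a throwaway archive to show the flags (assumes plain bzip2 is present; on the real tree you'd loop over nv_tegra/*.tbz2 with `-C rootfs`):

```shell
# pick a bzip2 implementation; lbzip2 is a parallel drop-in when available
decomp=bzip2
command -v lbzip2 >/dev/null 2>&1 && decomp=lbzip2

# throwaway archive standing in for the BSP tarballs
src=$(mktemp -d); dst=$(mktemp -d); arc=$(mktemp --suffix=.tbz2)
echo hello > "$src/file.txt"
tar -I "$decomp" -cf "$arc" -C "$src" file.txt
tar -I "$decomp" -xpmf "$arc" -C "$dst"   # -p keep perms, -m skip mtimes
cat "$dst/file.txt"   # prints hello
```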

then reenter the proot, and run

apt --fix-broken install

or if that has errors, reinstall the package entirely with

apt install --reinstall some-package-1.0

I’m currently working on getting the whole process, from beginning to end, to run as a regular user, so root is not required to build an image, much less the ability to mount. I was able to apply the kernel, modules, and OTA updates, do a full system upgrade, and install DeepStream, all without a single ‘sudo’. This may give you some clues on how to make a container that spits out viable images for flashing.

So. Some good progress. I patched apply_binaries and some other scripts to not require root, and they mostly work. Some bugs remain, but it’s coming along.

tegrity@b6a0037f0f2a:/Linux_for_Tegra$ ./ 
Using rootfs directory of: /Linux_for_Tegra/rootfs
Root file system directory is /Linux_for_Tegra/rootfs
Copying public debian packages to rootfs
Start L4T BSP package installation
Installing Jetson OTA server key in rootfs
/Linux_for_Tegra/rootfs /Linux_for_Tegra
Installing BSP Debian packages in /Linux_for_Tegra/rootfs
Selecting previously unselected package nvidia-l4t-ccp-t210ref.
(Reading database ... 118959 files and directories currently installed.)
Preparing to unpack .../nvidia-l4t-ccp-t210ref_32.3.1-20191209225816_arm64.deb ...
Pre-installing... skip compatibility checking.
Unpacking nvidia-l4t-ccp-t210ref (32.3.1-20191209225816) ...
Setting up nvidia-l4t-ccp-t210ref (32.3.1-20191209225816) ...
Selecting previously unselected package jetson-gpio-common.
(Reading database ... 118963 files and directories currently installed.)
Preparing to unpack .../jetson-gpio-common_2.0.4_arm64.deb ...
Unpacking jetson-gpio-common (2.0.4) ...
Selecting previously unselected package python-jetson-gpio.
Preparing to unpack .../python-jetson-gpio_2.0.4_arm64.deb ...
Unpacking python-jetson-gpio (2.0.4) ...
Selecting previously unselected package python3-jetson-gpio.
Preparing to unpack .../python3-jetson-gpio_2.0.4_arm64.deb ...
Unpacking python3-jetson-gpio (2.0.4) ...
Selecting previously unselected package nvidia-l4t-3d-core.
Preparing to unpack .../nvidia-l4t-3d-core_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-3d-core (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-apt-source.
Preparing to unpack .../nvidia-l4t-apt-source_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-apt-source (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-camera.
Preparing to unpack .../nvidia-l4t-camera_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-camera (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-configs.
Preparing to unpack .../nvidia-l4t-configs_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-configs (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-core.
Preparing to unpack .../nvidia-l4t-core_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-core (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-cuda.
Preparing to unpack .../nvidia-l4t-cuda_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-cuda (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-firmware.
Preparing to unpack .../nvidia-l4t-firmware_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-firmware (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-graphics-demos.
Preparing to unpack .../nvidia-l4t-graphics-demos_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-graphics-demos (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-gstreamer.
Preparing to unpack .../nvidia-l4t-gstreamer_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-gstreamer (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-init.
Preparing to unpack .../nvidia-l4t-init_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-init (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-initrd.
Preparing to unpack .../nvidia-l4t-initrd_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-initrd (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-jetson-io.
Preparing to unpack .../nvidia-l4t-jetson-io_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-jetson-io (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-multimedia-utils.
Preparing to unpack .../nvidia-l4t-multimedia-utils_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-multimedia-utils (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-multimedia.
Preparing to unpack .../nvidia-l4t-multimedia_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-multimedia (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-oem-config.
Preparing to unpack .../nvidia-l4t-oem-config_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-oem-config (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-tools.
Preparing to unpack .../nvidia-l4t-tools_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-tools (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-wayland.
Preparing to unpack .../nvidia-l4t-wayland_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-wayland (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-weston.
Preparing to unpack .../nvidia-l4t-weston_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-weston (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-x11.
Preparing to unpack .../nvidia-l4t-x11_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-x11 (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-xusb-firmware.
Preparing to unpack .../nvidia-l4t-xusb-firmware_32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-xusb-firmware (32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-kernel-dtbs.
Preparing to unpack .../nvidia-l4t-kernel-dtbs_4.9.140-tegra-32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-kernel-dtbs (4.9.140-tegra-32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-kernel-headers.
Preparing to unpack .../nvidia-l4t-kernel-headers_4.9.140-tegra-32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-kernel-headers (4.9.140-tegra-32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-kernel.
Preparing to unpack .../nvidia-l4t-kernel_4.9.140-tegra-32.3.1-20191209225816_arm64.deb ...
Unpacking nvidia-l4t-kernel (4.9.140-tegra-32.3.1-20191209225816) ...
Selecting previously unselected package nvidia-l4t-bootloader.
Preparing to unpack .../nvidia-l4t-bootloader_32.3.1-20191209225816_arm64.deb ...
Pre-installing bootloader package, skip flashing
Unpacking nvidia-l4t-bootloader (32.3.1-20191209225816) ...
Setting up jetson-gpio-common (2.0.4) ...
Setting up python-jetson-gpio (2.0.4) ...
Error while loading /var/lib/dpkg/info/python-jetson-gpio.postinst: Exec format error
dpkg: error processing package python-jetson-gpio (--install):
 installed python-jetson-gpio package post-installation script subprocess returned error exit status 1
Setting up python3-jetson-gpio (2.0.4) ...
Error while loading /var/lib/dpkg/info/python3-jetson-gpio.postinst: Exec format error
dpkg: error processing package python3-jetson-gpio (--install):
 installed python3-jetson-gpio package post-installation script subprocess returned error exit status 1
Setting up nvidia-l4t-apt-source (32.3.1-20191209225816) ...
Pre-installing... skip changing source list.
Setting up nvidia-l4t-configs (32.3.1-20191209225816) ...
Setting up nvidia-l4t-core (32.3.1-20191209225816) ...
Setting up nvidia-l4t-firmware (32.3.1-20191209225816) ...
Setting up nvidia-l4t-init (32.3.1-20191209225816) ...
/var/lib/dpkg/info/nvidia-l4t-init.postinst: line 30: /etc/hosts: Permission denied
Setting up nvidia-l4t-multimedia-utils (32.3.1-20191209225816) ...
Setting up nvidia-l4t-oem-config (32.3.1-20191209225816) ...
Setting up nvidia-l4t-tools (32.3.1-20191209225816) ...
Setting up nvidia-l4t-wayland (32.3.1-20191209225816) ...
Setting up nvidia-l4t-weston (32.3.1-20191209225816) ...
Setting up nvidia-l4t-x11 (32.3.1-20191209225816) ...
Setting up nvidia-l4t-xusb-firmware (32.3.1-20191209225816) ...
Pre-installing xusb firmware package, skip flashing
Setting up nvidia-l4t-kernel (4.9.140-tegra-32.3.1-20191209225816) ...
Setting up nvidia-l4t-bootloader (32.3.1-20191209225816) ...
Pre-installing bootloader package, skip flashing
Setting up nvidia-l4t-3d-core (32.3.1-20191209225816) ...
Setting up nvidia-l4t-cuda (32.3.1-20191209225816) ...
Setting up nvidia-l4t-graphics-demos (32.3.1-20191209225816) ...
Setting up nvidia-l4t-initrd (32.3.1-20191209225816) ...
Setting up nvidia-l4t-jetson-io (32.3.1-20191209225816) ...
Setting up nvidia-l4t-multimedia (32.3.1-20191209225816) ...
Setting up nvidia-l4t-kernel-dtbs (4.9.140-tegra-32.3.1-20191209225816) ...
ls: cannot access '*.dtb': No such file or directory
Setting up nvidia-l4t-kernel-headers (4.9.140-tegra-32.3.1-20191209225816) ...
Unpacking kernel headers...
Setting up nvidia-l4t-camera (32.3.1-20191209225816) ...
Setting up nvidia-l4t-gstreamer (32.3.1-20191209225816) ...
Processing triggers for nvidia-l4t-kernel (4.9.140-tegra-32.3.1-20191209225816) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Errors were encountered while processing:

The Dockerfile downloads the BSP, extracts it, patches it, switches user, downloads and extracts the rootfs, installs the OTA key, and applies the binaries… most of them, anyway. I have to look at why the GPIO package is failing. I’m definitely in a proot, and “arch” reports “aarch64”, so :/

root@b6a0037f0f2a:/# arch
aarch64
root@b6a0037f0f2a:/# apt --fix-broken install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 258 not upgraded.
2 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up python3-jetson-gpio (2.0.4) ...
Error while loading /var/lib/dpkg/info/python3-jetson-gpio.postinst: Exec format error
dpkg: error processing package python3-jetson-gpio (--configure):
 installed python3-jetson-gpio package post-installation script subprocess returned error exit status 1
Setting up python-jetson-gpio (2.0.4) ...
Error while loading /var/lib/dpkg/info/python-jetson-gpio.postinst: Exec format error
dpkg: error processing package python-jetson-gpio (--configure):
 installed python-jetson-gpio package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
E: Sub-process /usr/bin/dpkg returned an error code (1)

edit: yeah, there is something in the post-installation script that’s not detecting the arch correctly (or rather, it is, but it shouldn’t)
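For what it's worth, "Exec format error" while loading a file usually means the kernel (or proot) was asked to exec something it can't load directly: an aarch64 ELF with no qemu binfmt handler, or a file with a broken shebang. A quick hypothetical check (throwaway files standing in for the real `*.postinst`) to tell an ELF maintainer script from a shell one:

```shell
# true if the file starts with the ELF magic bytes 7f 45 4c 46 ("\x7fELF")
is_elf() {
  [ "$(head -c4 "$1" | od -An -tx1 | tr -d ' \n')" = "7f454c46" ]
}

bin=$(mktemp); printf '\177ELF' > "$bin"       # fake binary header
scr=$(mktemp); printf '#!/bin/sh\n' > "$scr"   # ordinary maintainer script
is_elf "$bin" && echo "ELF: needs qemu/binfmt to exec"
is_elf "$scr" || echo "script: interpreter named by shebang"
```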

So, I did patch apply_binaries and some others to work in user mode entirely. I am 80% certain it’s possible to do the entire build without root (basically everything but the flash itself). I’m working on something else at the moment, but if anybody wants to pick my brain on it or see my current patches, let me know.

I’m pretty sure this post is what I am trying to accomplish, but since it is a month old, I figured I’d check whether there are any updates. As a board-level hardware engineer, I’m somewhat out of my lane with Docker, but I know I need JetPack 4.3 in the image, and after all the forum posts I’ve searched, I think you are the only one publicly showing your progress.

Have you been able to produce a 32.3.1 Docker image with 4.3 fully installed? I can’t seem to get DeepStream onto my image, because I need CUDA 10+ first and I can’t manage that either.

Please update if you have progress. Your work is helping me!


I will try a few different approaches tomorrow and post back with my results here. Thanks for your interest.

Sorry for the lack of updates. I’ve been busy over the past month or so on a DeepStream project for an expo I was invited to. Unfortunately that’s been cancelled due to the coronavirus, so I have some more time now. I have put in some unpublished work, however, which I’ll share.

I decided installing SDK Manager in Docker is too ugly to do cleanly, since it requires X even in CLI mode, so I gave up on that. Instead, there is a BSP I am downloading, extracting, and patching. The BSP has the Linux_for_Tegra folder in it, and with the OTA updates now available, SDK Manager should no longer be required at all. That’s my end goal here.

I’ve also patched a bunch of NVIDIA scripts so that they no longer require root, or mount, or any privileges at all to run. I’m actually testing in rootless Docker. There are a few problems I anticipate when building the image (making the fs image itself), since that usually requires the ability to loopback-mount; however, newer versions of mke2fs support making a fs image directly from a folder, as well as uid/gid mapping, which I plan on using to generate the final image 100% in user mode. I’m not absolutely certain it’s possible, but the more I work on it, the more it seems like it may be.

I rushed some commit messages and published what I’ve done on my dev branch. I can’t guarantee everything works so far (image building won’t), but I just built and ran the image in rootless Docker, and you can run `tegrity-qemu --proot --enter rootfs` to enter a rootfs as a fake root user, run apt update, install NVIDIA software from the online apt repos, etc. There is one package that doesn’t install (jetson-gpio), so that’s temporarily disabled, but everything else seems to install. Of course, once the board is up, you can install jetson-gpio from the apt repos.

The next step in this project is to mke2fs the rootfs folder into a rootfs partition image that can be flashed. That’s mostly just a matter of reading a bunch of documentation. I’ve skimmed it, and it seems doable with Ubuntu 18.04+, but I haven’t actually run any tests yet. Worst comes to worst, you can generate a rootfs tarball from the rootfs folder and drop that out into a shared volume.
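The mke2fs-from-a-directory idea, sketched for reference: e2fsprogs 1.43+ can populate an ext4 image straight from a folder with `-d`, with no loop mount or root required, and `-E root_owner=0:0` maps your files to root inside the image. Sizes and names here are illustrative, not the real partition layout:

```shell
d=$(mktemp -d); echo hi > "$d/hello.txt"   # stand-in for the rootfs folder
img=$(mktemp)                              # stand-in for the APP partition image
truncate -s 16M "$img"                     # a real rootfs image needs many GB
if command -v mkfs.ext4 >/dev/null 2>&1; then
  # -d populates from the folder; -E root_owner maps our uid/gid to root;
  # -F is needed because the target is a regular file, not a block device
  mkfs.ext4 -q -F -E root_owner=0:0 -d "$d" "$img"
fi
```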