Adding "https://repo.download.nvidia.com/jetson" repositories to docker image

Hello guys,

I am making a docker image on my jetson nano to isolate some package installations from my host installation. I need to install some packages in the docker image (based on the nvcr.io/nvidia/l4t-base:r32.3.1 image), so I tried to add the repositories available in my host installation (JetPack 4.3) to my L4T docker image, which are:
https://repo.download.nvidia.com/jetson/common r32 main
and
https://repo.download.nvidia.com/jetson/t210 r32 main
However, apt-get update failed because the GPG keys for these repositories were not in my apt keyring, and I could not find where to get them.

Then I tried to directly install the packages named nvidia-l4t-apt-source and nvidia-l4t-ccp-t210ref, downloaded directly on my host (I also had to add `TEGRA_CHIPID 0x21` to /etc/nv_boot_control.conf in my docker image). As I read here, nvidia-l4t-apt-source configures the repositories according to the architecture of your board, and on my Jetson Nano it depends on nvidia-l4t-ccp-t210ref. The problem is that while installing nvidia-l4t-ccp-t210ref with dpkg, I get this error message:
L4T Debian install is not supported on your configuration
You should install an L4T image from the latest releases - starting
from r32.3+ to have Debian package support
I extracted the debian package itself and saw these lines in the DEBIAN/preinst script:

#Don't allow installing package on a system which doesn't have it pre-installed

case "$1" in
    install)
        echo "L4T Debian install is not supported on your configuration"
        echo "You should install an L4T image from the latest releases - starting"
        echo "from r32.3+ to have Debian package support"
        exit 1
        ;;
    *)
        ;;
esac

which clearly says you are doing it wrong if you are installing this package! It has to be pre-installed.

My question is, WHY?!
Then how can I have these repositories inside my l4t docker image?
I am confused. Any help will be appreciated.

------------------ UPDATE: ------------------
I have also seen that installing these packages is exactly equivalent to creating /etc/apt/sources.list.d/nvidia-l4t-apt-source.list with the relevant content; for the Jetson Nano it is:

deb https://repo.download.nvidia.com/jetson/common r32 main
deb https://repo.download.nvidia.com/jetson/t210 r32 main

and again, when running apt-get update in the docker image, the missing GPG key causes errors:

W: GPG error: https://repo.download.nvidia.com/jetson/common r32 InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 0D296FFB880FB004
E: The repository ‘https://repo.download.nvidia.com/jetson/common r32 InRelease’ is not signed.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: GPG error: https://repo.download.nvidia.com/jetson/t210 r32 InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 0D296FFB880FB004
E: The repository ‘https://repo.download.nvidia.com/jetson/t210 r32 InRelease’ is not signed.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

It seems the only solution is to find out where to get the GPG key from and add it to apt.

------------------ ANOTHER UPDATE: ------------------
I can export the keys on my host via apt-key exportall > trusted-keys, copy that file into my docker image, and import it with apt-key add trusted-keys. This works perfectly, but the question remains how to get the original key into my docker image without copying exported keys from my host, since this method will break when the key changes.
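For reference, this workaround can be sketched as a Dockerfile fragment; the trusted-keys file name and the base image tag are just the ones from my setup:

```dockerfile
# On the host, first export the keys apt already trusts:
#   sudo apt-key exportall > trusted-keys

FROM nvcr.io/nvidia/l4t-base:r32.3.1

# Copy the exported keyring into the image and register it with apt,
# then verify apt can still update.
COPY trusted-keys /tmp/trusted-keys
RUN apt-key add /tmp/trusted-keys \
    && rm /tmp/trusted-keys \
    && apt-get update \
    && rm -rf /var/lib/apt/lists/*
```

Again, this copies whatever keys the host happens to trust, so it will silently break when the key is rotated.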

Hi,

It looks like you are facing a similar issue to topic 112885.
Could you check the suggestions in that topic first?

Thanks.

Thanks, but no, I have seen that topic and cannot see how it is related. Remember, I am making a docker image which needs to have these repositories inside it; I am not flashing anything. I am running the Jetson with the JetPack sdcard image provided by nvidia, and inside it I have a docker image based on the nvidia l4t image, which I want to customize by adding these repositories to /etc/apt/sources.list.d/nvidia-l4t-apt-source.list, as the original sdcard image has them.
I can manually edit this package and bypass the check it performs so that it can be installed. My questions are: why did the developer assume it should not be done this way, and where can I find the GPG key for these repositories?


Here’s how you can do it, as well as pre-built images for every Tegra board and every JetPack version since 4.3.
https://hub.docker.com/r/mdegans/l4t-base

The Dockerfiles are linked from the readme on that page. You want to check out the l4t-base branch for the ones that work.


Regarding the GPG key, I agree Nvidia should absolutely have a server for that. It’s hidden in the BSP tarball, so my solution grabs it from inside there, and it’s a two-stage build so it only ends up keeping the key and the apt lists.
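The two-stage idea can be sketched roughly like this. BSP_URL and KEY_PATH are deliberate placeholders (the actual tarball URL and the key's path inside it depend on the L4T release), so treat this purely as an illustration of the shape, not as working values:

```dockerfile
# Placeholders, not real values: pass them in with --build-arg.
ARG BSP_URL
ARG KEY_PATH

# Stage 1: download the BSP tarball and pull out just the public key.
FROM ubuntu:bionic as keystage
ARG BSP_URL
ARG KEY_PATH
ADD ${BSP_URL} /bsp.tbz2
RUN tar -xjf /bsp.tbz2 -O "${KEY_PATH}" > /jetson-ota-public.asc

# Stage 2: only the extracted key is copied in; the tarball never
# appears in the final image's layers.
FROM nvcr.io/nvidia/l4t-base:r32.4.4
COPY --from=keystage /jetson-ota-public.asc /etc/apt/trusted.gpg.d/
```

The point of the two stages is exactly what the post says: the big tarball stays in the throwaway first stage, and only the key (and later the apt lists) end up in the final image.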

I also can’t answer the question of why the apt sources aren’t automatically enabled. There are also post-install scripts that break if you don’t use an official L4T image as a base, so you can’t use stock Ubuntu, even if you install the sources and the apt key.


Also note that you’ll still have to use --runtime nvidia to use the GPU, and it’ll bind mount most libraries, so it may make the most sense to just use the repos to install missing build dependencies at build time.

I’d recommend installing the build dependencies, building the thing, and purging the dependencies, along with the apt lists, all in the same layer, for best results. It doesn’t make sense to leave anything around, since it’s just going to be overwritten by the Nvidia runtime.


Great! Thank you for your answer, that is the trick I was looking for.

Oh I see, I had no idea what --runtime nvidia does … Thanks for pointing it out.

I have been told the behavior will change in a future release, but yeah, it’s the current behavior. There are a bunch of .csv files with lists of files to bind mount in a .d folder somewhere under /etc/nvidia-container-runtime/…

You can add your own .csv files in there if you want to bind mount more (or less) with the --runtime nvidia option. As on other platforms, I recommend not building or running containers as root if you can avoid it at all. Drop to an unprivileged user as soon as you can, fwiw. You only need to be in the video group to access the GPU.
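A minimal sketch of that last point; the user name appuser is made up, and this assumes a base image where the video group already exists (as it does on stock Ubuntu and L4T):

```dockerfile
FROM nvcr.io/nvidia/l4t-base:r32.4.4

# Create an unprivileged user; membership in the video group is what
# grants access to the GPU device nodes on L4T.
RUN useradd -m -G video appuser

# Everything from here on runs as the unprivileged user.
USER appuser
WORKDIR /home/appuser
```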

Hey guys,
I am also facing the same GPG error as you.
If you solved this, can you give me a hint how you bypassed this error?
W: GPG error: https://repo.download.nvidia.com/jetson/common r32 InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 0D296FFB880FB004
E: The repository ‘https://repo.download.nvidia.com/jetson/common r32 InRelease’ is not signed.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: GPG error: https://repo.download.nvidia.com/jetson/t210 r32 InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 0D296FFB880FB004
E: The repository ‘https://repo.download.nvidia.com/jetson/t210 r32 InRelease’ is not signed.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

@kumarhxkd

Adding the repo isn’t enough. You also need to add the apt key. Apt packages are signed, so for new sources you need to install the key(s) via one of these methods:

ADD --chown=root:root https://repo.download.nvidia.com/jetson/jetson-ota-public.asc /etc/apt/trusted.gpg.d/jetson-ota-public.asc
RUN chmod 644 /etc/apt/trusted.gpg.d/jetson-ota-public.asc
(... apt-get stanza here)

or

RUN apt-key adv --fetch-key https://repo.download.nvidia.com/jetson/jetson-ota-public.asc
(... apt-get stanza here)

You’ll also want to make sure the ca-certificates package is installed and updated before running the initial apt-get update, since the repos are https, and without certs apt will refuse to use such sources. Arguably, https in this case is pointless, but I digress. The certs can be left in the image, but should be updated in any derived images, since certs can be revoked. A full Dockerfile would look something like this:

# MIT License

# Copyright (c) 2020 Michael de Gans

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

ARG JETPACK_VERSION=r32.4.4
ARG BASE_IMAGE=nvcr.io/nvidia/l4t-base:${JETPACK_VERSION}

FROM ${BASE_IMAGE} as base

ARG SOC="t210"

ADD --chown=root:root https://repo.download.nvidia.com/jetson/jetson-ota-public.asc /etc/apt/trusted.gpg.d/jetson-ota-public.asc
RUN chmod 644 /etc/apt/trusted.gpg.d/jetson-ota-public.asc \
    && apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
    && echo "deb https://repo.download.nvidia.com/jetson/common r32 main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
    && echo "deb https://repo.download.nvidia.com/jetson/${SOC} r32 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
    && apt-get update \
    && rm -rf /var/lib/apt/lists/*
# the last two lines are just to test it works. Leaving the ca-certificates
# package in is intentional, since nvidia uses https sources and without that,
# apt will complain about "Certificate verification failed: The certificate is
# NOT trusted. The certificate issuer is unknown.  Could not handshake: Error
# in the certificate verification. [IP: 23.221.236.160 443]"
# You will probably want to update ca-certificates in each apt stanza in each
# derived image since certificates can be revoked periodically and that package
# should always be up to date.

Notes:

  • Each statement in a Dockerfile (e.g. RUN) is its own copy-on-write layer, meaning that if you add something in one layer, it will take up space even if you delete it in a subsequent layer.
  • In order to build $THING that needs cuda, you can either make the nvidia runtime the default, which has consequences for every image built on the system, including non-nvidia ones, or you can do something like this:
RUN apt-get update && apt-get install -y --no-install-recommends \
        cuda-foo-10.2 \
        other-build-dependency \
        other-runtime-dependency \
    && build-the-thing.sh \
    && apt-get purge --autoremove -y \
        cuda-foo-10.2 \
        other-build-dependency \
    && rm -rf /var/lib/apt/lists/*
  • Basically, anything that is bind mounted over anyway with --runtime nvidia is safe to remove, so if you install cuda, you can remove it when you’re done with it and slim down your image. It’ll still be available at runtime, at least until this approach changes towards a purely containerized one, in which case you’ll need to leave nvidia runtime dependencies in the image.

Hope that helps, and Happy New Year all!

Hey @mdegans thanks i will try it and let you know.
Also, can you tell me whether you are able to make a docker image from ubuntu from scratch, without using the nvidia image as a base?
I am not very familiar with apt, so I am having a lot of difficulty installing everything in Linux.
My final goal is to use gstreamer with the nvidia gpu.
So can you give me some hints about installation…

@kumarhxkd

IIRC, the most recent l4t-base images derive FROM ubuntu:bionic, so there is no need to roll your own. Just use l4t-base in place of ubuntu:bionic and it should usually just work.

You can attempt it with the Dockerfile above as well by changing the $BASE_IMAGE argument to ubuntu:bionic, but the last time I tried this (not recently), some scripts in Nvidia packages would fail because of some pre-installation checks they run.

Debian packages have scripts that run at various points in the install process. In this case I ran into a script that checks for a valid l4t filesystem, so there are probably some files you must add on top of ubuntu:bionic in order for some Nvidia packages to install. Tracking down which files you might need to add in order to pass these checks can be made easier with docker history on l4t-base.

It’s probably much easier to just use l4t-base for any GPU containers.

hey @mdegans, can we install l4t on a server like a GCP virtual machine, or is it just for jetson?
I want to build a pipeline where GStreamer feeds a video file into the GPU for better inference speed, and then I can use my model.

@kumarhxkd

L4T is only 4 Tegra, but DeepStream is available for x86 as well. If you write something using DeepStream you can use it with any Nvidia hardware*.

*But if you use it in the datacenter you cannot use GeForce according to the EULA. You must in this case use Tesla.

Hey, @mdegans thanks for your support. I will try it and let you know.