I was trying to build some containers based on these images. First, there is no image for l4t-base:36.4; it would be great to have one. Second, I tried l4t-base:r36.2.0 and realized that apt install nvidia-tensorrt installs TensorRT 8.6.3, not TensorRT 10.3.0. Furthermore, there are no tensorflow wheels beyond JetPack 6.1 here: Index of /compute/redist/jp. It looks like CUDA and TensorRT haven't changed since 6.1, but it would be good to add a note somewhere that on later platforms you are supposed to use the earlier wheels. Last but not least, nvidia-cuda is broken in that image. Logs proving all of this are below.
Is there any way to fix all of this?
Thanks.
host:# docker run -it nvcr.io/nvidia/l4t-base:r36.2.0
root@ab2e1a6244ae:/# apt update
Get:1 https://repo.download.nvidia.com/jetson/common r36.2 InRelease [2555 B]
Get:2 http://ports.ubuntu.com/ubuntu-ports jammy InRelease [270 kB]
Get:3 https://repo.download.nvidia.com/jetson/common r36.2/main arm64 Packages [24.3 kB]
Get:4 http://ports.ubuntu.com/ubuntu-ports jammy-updates InRelease [128 kB]
Get:5 http://ports.ubuntu.com/ubuntu-ports jammy-backports InRelease [127 kB]
Get:6 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease [129 kB]
Get:7 http://ports.ubuntu.com/ubuntu-ports jammy/restricted arm64 Packages [24.2 kB]
Get:8 http://ports.ubuntu.com/ubuntu-ports jammy/main arm64 Packages [1758 kB]
Get:9 http://ports.ubuntu.com/ubuntu-ports jammy/universe arm64 Packages [17.2 MB]
Get:10 http://ports.ubuntu.com/ubuntu-ports jammy/multiverse arm64 Packages [224 kB]
Get:11 http://ports.ubuntu.com/ubuntu-ports jammy-updates/multiverse arm64 Packages [30.6 kB]
Get:12 http://ports.ubuntu.com/ubuntu-ports jammy-updates/restricted arm64 Packages [3104 kB]
Get:13 http://ports.ubuntu.com/ubuntu-ports jammy-updates/main arm64 Packages [2619 kB]
Get:14 http://ports.ubuntu.com/ubuntu-ports jammy-updates/universe arm64 Packages [1488 kB]
Get:15 http://ports.ubuntu.com/ubuntu-ports jammy-backports/main arm64 Packages [81.0 kB]
Get:16 http://ports.ubuntu.com/ubuntu-ports jammy-backports/universe arm64 Packages [33.3 kB]
Get:17 http://ports.ubuntu.com/ubuntu-ports jammy-security/universe arm64 Packages [1195 kB]
Get:18 http://ports.ubuntu.com/ubuntu-ports jammy-security/main arm64 Packages [2320 kB]
Get:19 http://ports.ubuntu.com/ubuntu-ports jammy-security/restricted arm64 Packages [2977 kB]
Get:20 http://ports.ubuntu.com/ubuntu-ports jammy-security/multiverse arm64 Packages [24.2 kB]
Fetched 33.8 MB in 6s (5791 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
122 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: https://repo.download.nvidia.com/jetson/common/dists/r36.2/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
root@ab2e1a6244ae:/# apt install nvidia-tensorrt
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
cuda-toolkit-12-2-config-common cuda-toolkit-12-config-common cuda-toolkit-config-common libcublas-12-2 libcudnn8 libnvinfer-dispatch8
libnvinfer-lean8 libnvinfer-plugin8 libnvinfer-vc-plugin8 libnvinfer8 libnvonnxparsers8 libnvparsers8 tensorrt-libs
The following NEW packages will be installed:
cuda-toolkit-12-2-config-common cuda-toolkit-12-config-common cuda-toolkit-config-common libcublas-12-2 libcudnn8 libnvinfer-dispatch8
libnvinfer-lean8 libnvinfer-plugin8 libnvinfer-vc-plugin8 libnvinfer8 libnvonnxparsers8 libnvparsers8 nvidia-tensorrt tensorrt-libs
0 upgraded, 14 newly installed, 0 to remove and 122 not upgraded.
Need to get 943 MB of archives.
After this operation, 2616 MB of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort.
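In case it helps anyone reproducing this: you can confirm which TensorRT version the r36.2 repo resolves to without committing to the 943 MB download by querying apt first (the package names here are the ones from the log above):

```shell
# Show the candidate versions apt would install, without downloading anything.
# nvidia-tensorrt and libnvinfer8 are the package names from the log above.
apt-cache policy nvidia-tensorrt libnvinfer8

# Simulate the install to see the full dependency set that would be pulled in:
apt-get install --simulate nvidia-tensorrt
```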
root@ab2e1a6244ae:/# apt install nvidia-cuda
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
cuda-compat-12-2 : PreDepends: nvidia-l4t-core but it is not installable
nvidia-l4t-cudadebuggingsupport : Depends: nvidia-l4t-cuda (> 36.0.0-0) but it is not installable
Depends: nvidia-l4t-cuda (< 37.0.0-0) but it is not installable
E: Unable to correct problems, you have held broken packages.
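For what it's worth, the unmet dependency is nvidia-l4t-core, a BSP package that only exists on the Jetson host and is not installable inside a container. One workaround I have seen (a sketch only, I have not verified it in this exact image) is to build an empty dummy nvidia-l4t-core package with equivs so that apt's dependency check is satisfied:

```shell
# Sketch of a workaround, not an official fix: satisfy the nvidia-l4t-core
# dependency inside the container with an empty placeholder package.
apt-get update && apt-get install -y equivs

# equivs-control writes a control-file template named "nvidia-l4t-core";
# set the package name and a version inside the (> 36.0.0, < 37.0.0) range
# that nvidia-l4t-cudadebuggingsupport asks for in the error above.
equivs-control nvidia-l4t-core
sed -i 's/^Package:.*/Package: nvidia-l4t-core/' nvidia-l4t-core
sed -i 's/^# Version:.*/Version: 36.2.0-0/' nvidia-l4t-core

# Build and install the placeholder .deb, then retry the original install.
equivs-build nvidia-l4t-core
dpkg -i nvidia-l4t-core_*_all.deb
apt-get install -y nvidia-cuda
```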
Another note worth mentioning: I just realized that the “general” tensorflow images now seem to support Jetson architectures. The information is somewhat hidden in this article: TensorFlow For Jetson Platform - NVIDIA Docs. I tested it and it worked.
The tensorflow installation instructions here seem to be outdated. However, I tested that way of installing tensorflow on the host and it seems to work fine as well. That said, I couldn’t figure out how to build tensorflow in a container starting from an l4t-base image. Any guidance would be appreciated. My attempt was this:
FROM nvcr.io/nvidia/l4t-base:r36.2.0
# these seem to be the minimal packages
RUN apt update && \
apt install -yq --no-install-recommends \
libcudla-12-2 \
nvidia-tensorrt-dev \
nvidia-cudnn && \
apt clean && \
rm -rf /var/lib/apt/lists/*
# to satisfy the tensorrt 8 requirement of that image we use the v60 wheel index
RUN pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v60 tensorflow==2.16.1+nv24.07
ENV CUDA_HOME="/usr/local/cuda"
ENV PATH="/usr/local/cuda/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/lib/aarch64-linux-gnu:/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
but this is what I got inside the image:
root@7d34b5a49402:/workspace# python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2025-03-04 20:43:25.894738: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.
2025-03-04 20:43:25.906065: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
[]
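One thing worth double-checking (an assumption on my part, since the docker run command isn't shown above): on Jetson the GPU device nodes and driver libraries are injected by the NVIDIA container runtime, so the container has to be started with it or tensorflow silently falls back to CPU:

```shell
# Assumption: the image built from the Dockerfile above is tagged my-tf-image.
# Without --runtime nvidia the host driver libraries are not mounted into
# the container, which can produce exactly the "Cannot dlopen some GPU
# libraries" warning seen above.
docker run --rm --runtime nvidia my-tf-image \
  python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

# Inside the container, check which GPU libraries the dynamic linker can see:
ldconfig -p | grep -E 'libcuda|libcudnn|libnvinfer'
```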
Since we moved to standard Linux distros from JetPack 6.1, we don’t release l4t-base images anymore.
If you want to use a newer BSP (e.g. r36.4.3), please use the l4t-cuda image instead.
@AastaLLL Thanks for the answer. Unfortunately, I do not understand this. How do I know which l4t-cuda image belongs to which JetPack version? There is no version matrix. Are you saying they all work? I tried nvcr.io/nvidia/l4t-cuda:12.6.11-runtime and there is no TensorRT (on apt, at least). How would I install that? And which version of tensorflow would I use?
There is an l4t-jetpack image with a 36.4.0 release that includes TensorRT 10, but that one is a whopping 10 GB and won’t even fit on the AGX Orin internal storage.
At this point the JetPack docker ecosystem seems quite fragmented and poorly documented. It is really hard to figure out which images to use and which images are outdated. How do I know which images belong to which version of JetPack? It is not even clear which images are built for it and which are not. The above “general” tensorflow image seems to work fine, but there is no mention of it in the Jetson documentation. Working on Jetson systems with docker has become very challenging. Could you elaborate on:
1) What is a minimal container that is compatible with modern JetPack (>=6)? Often you don’t want a catch-all docker container; you want one with the minimal set of tools necessary to run a certain application, so that you can pass it along to collaborators and customers. Does “since we moved to standard Linux distros from JetPack 6.1” mean I should just use the normal ubuntu image?
$ docker run -it ubuntu:jammy
root@2d618b423d0c:/# uname -a
Linux 2d618b423d0c 5.15.148-tegra #1 SMP PREEMPT Tue Jan 7 17:14:38 PST 2025 aarch64 aarch64 aarch64 GNU/Linux
That does indeed seem to be the case, as far as I can tell.
2) How do I build out such a minimal docker image to include more features (cuda, tensorrt, pytorch, tensorflow, deepstream, ros, etc.)? Do I just add the JetPack apt repository for that JetPack version?
I am running 10+ containers simultaneously on some of our systems, and having tailored images for each task is essential.
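To make question 2) concrete, this is the kind of minimal Dockerfile I have in mind. This is only a sketch: the r36.4 suite name and the cuda-toolkit-12-6 package are my guesses for JetPack 6.1+, and I have not verified that it builds.

```dockerfile
# Sketch, not verified: stock Ubuntu plus NVIDIA's Jetson apt repository.
FROM ubuntu:jammy

# Assumptions: the r36.4 suite must match the BSP on the host, and
# cuda-toolkit-12-6 is assumed to be the matching CUDA package for JetPack 6.1+.
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates curl gnupg && \
    curl -fsSL https://repo.download.nvidia.com/jetson/jetson-ota-public.asc \
      | gpg --dearmor -o /usr/share/keyrings/jetson.gpg && \
    echo "deb [signed-by=/usr/share/keyrings/jetson.gpg] https://repo.download.nvidia.com/jetson/common r36.4 main" \
      > /etc/apt/sources.list.d/nvidia-l4t.list && \
    apt-get update && \
    apt-get install -y --no-install-recommends cuda-toolkit-12-6 && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

ENV PATH="/usr/local/cuda/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
```

The idea would be to swap in only the packages a given application needs (tensorrt, deepstream, etc.) instead of pulling a 10 GB catch-all image.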
If you need TensorRT, you can use the l4t-tensorrt image as well.
The BSP and JetPack version info can be found in the link below.
The subpage of each JetPack version also lists all the libraries in detail.
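To match an image against your board, the installed BSP release can also be read directly on the Jetson host:

```shell
# On the Jetson host, the installed BSP release is recorded in
# /etc/nv_tegra_release, e.g. "# R36 (release), REVISION: 4.3, ...".
cat /etc/nv_tegra_release

# The same information is available from the nvidia-l4t-core package:
dpkg-query --showformat='${Version}\n' --show nvidia-l4t-core
```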
A standard Ubuntu image should work on JetPack 6.
But if you want GPU/CUDA support, some extra drivers and libraries are required.
That’s why the base image we recommend is l4t-cuda.
You can install all the required libraries (e.g. TensorRT) on top of that image.
You can install the libraries manually or use our prebuilt packages.
Some examples can be found in the jetson-containers repo: