NVIDIA L4T TensorRT containers with libnvinfer-dev

The NVIDIA L4T TensorRT containers only come in runtime variants. We need to compile TensorRT plugins in those containers but are currently unable to do so because the include headers are missing.

Is there a plan to support an l4t-tensorrt variant that ships not only the runtime but the full install, similar to the non-Tegra TensorRT base image? Bonus: having the same versioning (e.g. 22.04) as the NGC TensorRT container for servers would be a very good addition as well.

We need this to get started with porting our stack to Jetson Orin and JetPack 5.0. The l4t-base image no longer mounts the host TensorRT install, but l4t-tensorrt does not have the necessary headers, so we are stuck at the moment.

Based on a recent post, maybe @AastaLLL can help out here? It would be very much appreciated.

Hi,

This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we will move this post to the Jetson-related forum.

Thanks!

@NVES issue moved to Jetson AGX Orin and tagged with TensorRT.
The two provided links are not relevant for the discussion.

The key issue here is that TensorRT is no longer mounted from the Docker host (JetPack install) in the most recent versions of the L4T base container images.

While this is a very welcome change (it makes everything a little more portable), we are lacking an l4t-tensorrt image with a full install so we can run certain build commands for TensorRT plugins in our multi-stage Docker build.
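For reference, the build stage we have in mind would look roughly like the sketch below. The devel tag is hypothetical (it does not exist today, which is exactly the problem), and plugin.cpp stands in for our actual plugin sources:

```dockerfile
# Hypothetical devel image with the full TensorRT install (tag does not exist today)
FROM nvcr.io/nvidia/l4t-tensorrt:r8.4-devel AS build

RUN apt-get update && apt-get install -y g++ make

COPY plugin.cpp /src/

# This step needs NvInfer.h and friends, which the current
# runtime-only l4t-tensorrt images do not ship
RUN g++ -shared -fPIC -o /src/libmyplugin.so /src/plugin.cpp -lnvinfer
```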

@NVES if you know who could help here feel free to ping the colleagues. Thanks.

Tried installing libnvinfer-dev inside the container, but it points to the internal repository http://cuda-internal.nvidia.com/release-candidates/kitpicks/tensorrt-rel-8-4-tegra/8.4.0/001/repos/l4t/arm64/InRelease, which cannot be resolved from outside NVIDIA, so that does not help.

root@jetson-orin:/# apt update
Err:1 http://cuda-internal.nvidia.com/release-candidates/kitpicks/tensorrt-rel-8-4-tegra/8.4.0/001/repos/l4t/arm64 InRelease
  Could not resolve 'cuda-internal.nvidia.com'
Get:2 http://ports.ubuntu.com/ubuntu-ports focal InRelease [265 kB]
Get:3 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease [114 kB]
Get:4 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease [108 kB]
Get:5 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [114 kB]
Get:6 http://ports.ubuntu.com/ubuntu-ports focal/main arm64 Packages [1234 kB]
Get:7 http://ports.ubuntu.com/ubuntu-ports focal/restricted arm64 Packages [1317 B]
Get:8 http://ports.ubuntu.com/ubuntu-ports focal/universe arm64 Packages [11.1 MB]
Get:9 http://ports.ubuntu.com/ubuntu-ports focal/multiverse arm64 Packages [139 kB]
Get:10 http://ports.ubuntu.com/ubuntu-ports focal-updates/main arm64 Packages [1549 kB]
Get:11 http://ports.ubuntu.com/ubuntu-ports focal-updates/universe arm64 Packages [1091 kB]
Get:12 http://ports.ubuntu.com/ubuntu-ports focal-updates/multiverse arm64 Packages [9066 B]
Get:13 http://ports.ubuntu.com/ubuntu-ports focal-updates/restricted arm64 Packages [4157 B]
Get:14 http://ports.ubuntu.com/ubuntu-ports focal-backports/universe arm64 Packages [26.0 kB]
Get:15 http://ports.ubuntu.com/ubuntu-ports focal-backports/main arm64 Packages [51.2 kB]
Get:16 http://ports.ubuntu.com/ubuntu-ports focal-security/restricted arm64 Packages [3916 B]
Get:17 http://ports.ubuntu.com/ubuntu-ports focal-security/universe arm64 Packages [807 kB]
Get:18 http://ports.ubuntu.com/ubuntu-ports focal-security/multiverse arm64 Packages [3254 B]
Get:19 http://ports.ubuntu.com/ubuntu-ports focal-security/main arm64 Packages [1189 kB]
Fetched 17.8 MB in 2s (11.5 MB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: Failed to fetch http://cuda-internal.nvidia.com/release-candidates/kitpicks/tensorrt-rel-8-4-tegra/8.4.0/001/repos/l4t/arm64/InRelease  Could not resolve 'cuda-internal.nvidia.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
root@jetson-orin:/# apt install libnvinfer-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package libnvinfer-dev is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  libnvinfer-bin

E: Package 'libnvinfer-dev' has no installation candidate

Hi,

We have filed an internal request for a devel version of the TensorRT container.
We will share more information with you once we get feedback.

Thanks.


Hi,

A temporary workaround is to create the container on top of l4t-base manually.
For example, we can build the trtexec sample in the container below:

$ sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r34.1
$ apt update && apt install g++ make
$ echo "deb https://repo.download.nvidia.com/jetson/t234 r34.1 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
$ apt update
$ apt install nvidia-tensorrt
$ apt install nvidia-cuda
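The manual steps above can also be captured in a Dockerfile so they fit into an automated or multi-stage build. A sketch, assuming the same r34.1 base image and t234 (Orin) apt feed as in the example:

```dockerfile
FROM nvcr.io/nvidia/l4t-base:r34.1

# Add the Jetson apt feed for t234 (Orin) packages, then install the
# compiler plus the TensorRT and CUDA packages from it
RUN echo "deb https://repo.download.nvidia.com/jetson/t234 r34.1 main" \
      >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
 && apt-get update \
 && apt-get install -y g++ make nvidia-tensorrt nvidia-cuda
```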

Thanks.

Thanks @AastaLLL
We can use the provided workaround in a multi-stage Docker build, with l4t-base for building the engine and the l4t-tensorrt runtime image for deployment. For now we are set.
A devel l4t-tensorrt variant would still be useful to have; happy to hear news on that internal ticket.
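For others landing here, the multi-stage split could look roughly like this. It is only a sketch: the image tags are illustrative, and plugin.cpp / libmyplugin.so are placeholders for our actual plugin build:

```dockerfile
# Build stage: l4t-base plus the packages from the Jetson apt feed
FROM nvcr.io/nvidia/l4t-base:r34.1 AS build
RUN echo "deb https://repo.download.nvidia.com/jetson/t234 r34.1 main" \
      >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
 && apt-get update \
 && apt-get install -y g++ make nvidia-tensorrt nvidia-cuda
COPY plugin.cpp /src/
RUN g++ -shared -fPIC -o /src/libmyplugin.so /src/plugin.cpp -lnvinfer

# Deploy stage: runtime-only image, only the built artifact is copied in
FROM nvcr.io/nvidia/l4t-tensorrt:r8.4.0-runtime
COPY --from=build /src/libmyplugin.so /opt/plugins/
```

This keeps the dev packages out of the shipped image while still compiling against the full headers in the build stage.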

Hi @philipp.schmidt, I have a Dockerfile here which you can build that installs the dev packages for CUDA Toolkit, cuDNN, and TensorRT: https://github.com/dusty-nv/jetson-containers/blob/master/Dockerfile.jetpack

You can build it with scripts/docker_build_jetpack.sh


Hi @dusty_nv
That's very useful, thanks. Especially having OpenCV with CUDA is a nice addition.

Hi, @philipp.schmidt

We got some feedback from our internal team.
We have an ML container that has all the dev components installed.
Is that enough for you, or would a devel TensorRT container be better?

Thanks.

Hello @AastaLLL
Which container is this? l4t-ml, I suppose?
From the description of that container, it wasn't clear to me that it ships with TensorRT.


We can use l4t-ml as the build stage and then ship with l4t-tensorrt:runtime, so we are good.
It would probably be helpful for others to have a reference to l4t-ml in the NGC documentation, though.

Thanks for the fast responses the last few days. Very much appreciated.

I just saw that l4t-ml is from the jetson-containers repo of @dusty_nv, so now everything makes sense :)

Yes, and starting with R34.1, the l4t-ml container is based on that new JetPack Dockerfile I pointed you to, so both of them have the dev packages for CUDA/cuDNN/TensorRT inside (and the same goes for l4t-pytorch and l4t-tensorflow).
