Missing libnvdla_compiler.so for trtexec in nvcr.io/nvidia/l4t-tensorrt:r8.4.1-runtime

I am trying to run trtexec on a Jetson Orin with L4T R35.1 / JetPack 5.0.2 inside Docker.

Minimal Dockerfile:

FROM nvcr.io/nvidia/l4t-base:r35.1.0
RUN echo "deb https://repo.download.nvidia.com/jetson/t234 r35.1 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
RUN apt-get update && apt-get -y install cmake wget nvidia-cuda nvidia-tensorrt && apt-get clean && rm -rf /var/lib/apt/lists/*
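
For reference, the image was built and started roughly like this (commands reconstructed for illustration; the tag l4t-trt-test is arbitrary). Note that no --runtime flag is passed here:

$ docker build -t l4t-trt-test .
$ docker run -it l4t-trt-test /bin/bash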

But trtexec fails to load a required shared library:

root@2a67591e5388:# ./usr/src/tensorrt/bin/trtexec
./trtexec: error while loading shared libraries: libnvdla_compiler.so: cannot open shared object file: No such file or directory
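
To list every unresolved dependency at once, ldd is a quick diagnostic (standard tooling, nothing specific to this image; output trimmed to the relevant line):

root@2a67591e5388:/# ldd /usr/src/tensorrt/bin/trtexec | grep "not found"
	libnvdla_compiler.so => not found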

I also tried installing the full nvidia-jetpack-runtime package instead, but apt fails during the Docker build because the nvidia-l4t-core preinst script reads /proc/device-tree/compatible, which does not exist inside a build container:

(Reading database ... 34973 files and directories currently installed.)
#0 292.8 Preparing to unpack .../nvidia-l4t-core_35.1.0-20220825113828_arm64.deb ...
#0 292.8 /var/lib/dpkg/tmp.ci/preinst: line 41: /proc/device-tree/compatible: No such file or directory
#0 292.8 dpkg: error processing archive /var/cache/apt/archives/nvidia-l4t-core_35.1.0-20220825113828_arm64.deb (--unpack):
#0 292.8  new nvidia-l4t-core package pre-installation script subprocess returned error exit status 1
#0 292.8 Errors were encountered while processing:
#0 292.8  /var/cache/apt/archives/nvidia-l4t-core_35.1.0-20220825113828_arm64.deb
#0 293.1 E: Sub-process /usr/bin/dpkg returned an error code (1)
------
failed to solve: executor failed running [/bin/sh -c apt-get update && apt-get -y install cmake wget nvidia-jetpack-runtime && apt-get clean && rm -rf /var/lib/apt/lists/*]: exit code: 100
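
For anyone debugging this: the failing check can be inspected by extracting the preinst script from the cached .deb (a diagnostic sketch; the package filename is taken from the log above, and /tmp/l4t-core is an arbitrary directory):

$ dpkg-deb -e /var/cache/apt/archives/nvidia-l4t-core_35.1.0-20220825113828_arm64.deb /tmp/l4t-core
$ sed -n '35,45p' /tmp/l4t-core/preinst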

Edit: The same error occurs in nvcr.io/nvidia/l4t-tensorrt:r8.4.1-runtime:

docker run -it nvcr.io/nvidia/l4t-tensorrt:r8.4.1-runtime /bin/bash

root@7d5ea2f03322:/usr/src/tensorrt/bin# ./trtexec 
./trtexec: error while loading shared libraries: libnvdla_compiler.so: cannot open shared object file: No such file or directory

Hi,

trtexec from l4t-tensorrt:r8.4.1-runtime works fine in our environment.
Could you check it again?

  1. Setup Orin with JetPack 5.0.2
  2. Launch container:
$ sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-tensorrt:r8.4.1-runtime
# /usr/src/tensorrt/bin/trtexec -h
&&&& RUNNING TensorRT.trtexec [TensorRT v8401] # /usr/src/tensorrt/bin/trtexec -h
=== Model Options ===
  --uff=<file>                UFF model
  --onnx=<file>               ONNX model
  --model=<file>              Caffe model (default = no model, random weights used)
  --deploy=<file>             Caffe prototxt file
  --output=<name>[,<name>]*   Output names (it can be specified multiple times); at least one output is required for UFF and Caffe
  --uffInput=<name>,X,Y,Z     Input blob name and its dimensions (X,Y,Z=C,H,W), it can be specified multiple times; at least one is required for UFF models
  --uffNHWC                   Set if inputs are in the NHWC layout instead of NCHW (use X,Y,Z=H,W,C order in --uffInput)

...
=== Help ===
  --help, -h                  Print this message

Thanks.

Hello @AastaLLL

Indeed, setting the runtime to nvidia was the missing piece. Problem solved. Thanks for the quick response.

What is the best way to find out which libraries are mounted from the host into the container?
For example, is it even necessary to install nvidia-cuda in the l4t-base image if we want to run TensorRT? Or is CUDA mounted if the host has it installed?

From what I understood of the updated 5.0.2 documentation, no libraries at all are mounted from the host.
That’s why we didn’t consider setting the container runtime in the first place when we hit the shared library error.
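
Side note for others hitting this: if the BSP libraries are also needed at build time (the classic docker build uses the default runtime and has no --runtime flag), nvidia can be made the default runtime in /etc/docker/daemon.json, roughly like this (a sketch based on the standard nvidia-container-runtime setup; restart the Docker daemon afterwards):

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}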

Hi,

Yes, as of JetPack 5.0.2, CUDA-related libraries are included directly in the container.
But running with the nvidia runtime enables access to some BSP libraries from the host.
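
A quick way to see this in action (illustrative; /usr/lib/aarch64-linux-gnu/tegra is where the BSP libraries appear) is to list that directory with and without the nvidia runtime:

$ sudo docker run --rm nvcr.io/nvidia/l4t-tensorrt:r8.4.1-runtime ls /usr/lib/aarch64-linux-gnu/tegra
$ sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-tensorrt:r8.4.1-runtime ls /usr/lib/aarch64-linux-gnu/tegra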

The list of mounted files can be found in the following file on the host:

/etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv
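
For example, to confirm that the library from the original error comes from the host, you can grep for it on the Jetson (run on the host, outside the container):

$ grep nvdla /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv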

Thanks.
