NvMediaDlaGetMaxOutstandingRequests Error When Using Docker Container

Hi all,

I am having a problem with my custom Docker container on the newest JetPack 4.6.1. The container is basically a JetPack installation on top of the nvcr.io/nvidia/l4t-base:r32.4.4 image, and it includes CUDA 10.0, cuDNN 7.6.3, and TensorRT 5.1.6. The reason I’m using this old image is that TF-TRT inference performance is much worse with TensorFlow versions after 1.13.1 and the newer TensorRT releases they require.

I don’t have any problems deploying this image on Jetsons running JetPack versions up to 4.5.x, but I hit the problem below as soon as I deploy it on JetPack 4.6.1:

# python3
Python 3.6.9 (default, Dec  8 2021, 21:08:43) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorrt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.6/dist-packages/tensorrt/__init__.py", line 1, in <module>
    from .tensorrt import *
ImportError: /usr/lib/aarch64-linux-gnu/libnvinfer.so.5: undefined symbol: NvMediaDlaGetMaxOutstandingRequests

I cannot find anything about NvMediaDlaGetMaxOutstandingRequests on the internet except NVIDIA Drive platform resources. Docker containers are supposed to be isolated from host resources; I know --runtime nvidia maps some host components into the container, but I cannot get it working even when I don’t use that flag.
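
For reference, this is roughly how I have been checking where the missing symbol comes from and what --runtime nvidia mounts from the host (the CSV directory below is the default nvidia-container-runtime location on my devices; adjust it if your setup differs):

# Inside the container: libnvinfer.so.5 references the symbol but nothing resolves it
nm -D /usr/lib/aarch64-linux-gnu/libnvinfer.so.5 | grep NvMediaDlaGetMaxOutstandingRequests

# On the host: list what the NVIDIA runtime is configured to mount into containers
ls /etc/nvidia-container-runtime/host-files-for-container.d/
grep -ri "nvdla\|libnvinfer" /etc/nvidia-container-runtime/host-files-for-container.d/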

What changed in JetPack 4.6.1 that could affect this? Thanks in advance.

This is the beginning of my Dockerfile, the part related to installing the JetPack components (the build and run commands I use to reproduce the error follow right after it):

FROM nvcr.io/nvidia/l4t-base:r32.4.4

WORKDIR /root
ENV DEBIAN_FRONTEND noninteractive

# Essential Ubuntu-base installations
RUN apt update && \
    apt install -y --no-install-recommends \
    build-essential \
    cmake \
    make \
    gcc \
    g++ \
    pkg-config \
    unzip \
    yasm \
    git \
    checkinstall \
    python3-pip \
    python3-dev \
    python3-testresources \
    python3-cffi \
    wget \
    gnupg2 \
    libgail-common \
    libgail18 \
    libgtk2.0-0 \
    libgtk2.0-bin \
    libgtk2.0-common \
    libtbb2 \
    libv4l-dev \
    v4l-utils && \
    rm -rf /var/lib/apt/lists/*

# Clean the CUDA-10.2 resources under /usr/local
RUN cd /usr/local/ && \
    rm -rf cuda && \
    rm -rf cuda-10.2
    
COPY packages /jp43_packages

# Install JetPack 4.3 CUDA and CUDA-X libraries
RUN cd /jp43_packages && \
    dpkg -i cuda-repo-l4t-10-0-local-10.0.326_1.0-1_arm64.deb && \
    apt-key add /var/cuda-repo-10-0-local-10.0.326/7fa2af80.pub && \
    apt update && \
    apt install -y --no-install-recommends \
    cuda-cusparse-10-0 \
    cuda-cupti-10-0 \
    cuda-cusolver-10-0 \
    cuda-cufft-10-0 \
    cuda-cublas-10-0 \
    cuda-cublas-dev-10-0 \
    cuda-compiler-10-0 \
    cuda-cudart-10-0 \
    cuda-tools-10-0 \
    cuda-curand-10-0 \
    cuda-curand-dev-10-0 \
    cuda-nvcc-10-0 \
    cuda-libraries-10-0 && \
    ln -s /usr/local/cuda-10.0 /usr/local/cuda && \
    ln -s /usr/local/cuda-10.0/targets/aarch64-linux/lib/libcurand.so.10.0.326 /usr/local/cuda-10.0/targets/aarch64-linux/lib/libcurand.so.10 && \
    rm -rf /var/lib/apt/lists/*

ENV PATH="/usr/local/cuda-10.0/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

RUN cd /jp43_packages && \
    dpkg -i \
    libcudnn7_7.6.3.28-1+cuda10.0_arm64.deb \
    libcudnn7-dev_7.6.3.28-1+cuda10.0_arm64.deb \
    libnvinfer5_5.1.6-1+cuda10.0_arm64.deb \
    libnvinfer-dev_5.1.6-1+cuda10.0_arm64.deb \
    libnvinfer-samples_5.1.6-1+cuda10.0_all.deb \
    python3-libnvinfer_5.1.6-1+cuda10.0_arm64.deb \
    python3-libnvinfer-dev_5.1.6-1+cuda10.0_arm64.deb \
    uff-converter-tf_5.1.6-1+cuda10.0_arm64.deb \
    graphsurgeon-tf_5.1.6-1+cuda10.0_arm64.deb \
    tensorrt_5.1.6.1-1+cuda10.0_arm64.deb && \
    dpkg -i \
    OpenCV-4.1.1-2-gd5a58aa75-aarch64-libs.deb \
    OpenCV-4.1.1-2-gd5a58aa75-aarch64-dev.deb \
    OpenCV-4.1.1-2-gd5a58aa75-aarch64-python.deb && \
    rm -rf /var/lib/apt/lists/*
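
For completeness, these are the commands I use to build the image and reproduce the issue (the image tag is just an example); on JetPack 4.6.1 the import fails the same way with and without --runtime nvidia:

# Build the image from the Dockerfile above (example tag)
docker build -t jp43-trt5 .

# Reproduce: the TensorRT import fails either way on JetPack 4.6.1
docker run --rm jp43-trt5 python3 -c "import tensorrt"
docker run --rm --runtime nvidia jp43-trt5 python3 -c "import tensorrt"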

Hi,

Do you want to run an r32.4.4 container on a Jetson set up with r32.6.1?

Based on the error, it looks like a compatibility issue.
Please note that DLA uses some extra libraries located in the folder below:

/usr/lib/aarch64-linux-gnu/tegra/libnvdla_runtime.so
/usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so

So you may need to make sure that the TensorRT library (/usr/lib/aarch64-linux-gnu/libnvinfer.so.5) and the DLA libraries above are compatible.
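
For example, you can check whether the libraries mounted from the host still export the symbol that libnvinfer.so.5 expects (nm is part of binutils; no output means the symbol is not exported on that release):

# Run on the host, or inside a container started with --runtime nvidia
nm -D /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so | grep NvMediaDlaGetMaxOutstandingRequests
nm -D /usr/lib/aarch64-linux-gnu/tegra/libnvdla_runtime.so | grep NvMediaDlaGetMaxOutstandingRequests

# Or search every library in the mounted tegra directory
grep -rl NvMediaDlaGetMaxOutstandingRequests /usr/lib/aarch64-linux-gnu/tegra/ 2>/dev/null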

Thanks.

Hi @AastaLLL,

Thank you for your reply. That is exactly what I want to do. But I do not understand what changed that allows it on r32.5.x but not on r32.6.1. Since the DLA capabilities are the same, what could be causing this issue?

Thanks.

Hi,

We kept DLA at version 1.3.0 for a while, and upgraded it to 1.3.6 starting from JetPack 4.6.1.

So the software is different, and that might be the reason for this issue.
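
If you want to confirm the DLA version on a given device, one way (assuming the L4T BSP was installed through apt) is to check which package owns the DLA libraries:

# On the host: which L4T package provides the DLA compiler library, and its version
dpkg -S /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so
dpkg -l | grep -i nvidia-l4t
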
Thanks.

Hi,

Thanks for the answer, that is what I was looking for. Will there be a patch for the NvMediaDlaGetMaxOutstandingRequests issue, given that this update breaks containers in a way that goes against the whole idea of containerization? Or is there any workaround?

Nobody will be able to use good old TF-TRT with TensorFlow 1.13.1, which is the fastest combination right now, on newer JetPack versions anymore.
