• Hardware Platform: dGPU
• DeepStream Version: 6.4
• NVIDIA GPU Driver Version: 552.74
• Issue Type: Questions
• Why do we need to do this? We would like to automate the build of our DeepStream application for both x86_64 dGPU and ARM64 Jetson platforms on one workstation. We have a custom GStreamer plugin, based on gst-dsexample, that creates a PIP (Picture-in-Picture) with a scaled close-up of the detected object in the bottom right of the frame.
• How to reproduce the issue?
We are looking to automate the build of our Jetson DeepStream Docker image on an x86_64 platform using QEMU. The first thing that stands out is that when I run the following image on my x86_64 device, I have broken links for libnvbufsurface and libnvbufsurftransform:
docker run --platform linux/arm64 -it --runtime=nvidia nvcr.io/nvidia/deepstream:6.4-triton-multiarch /bin/bash
cd /opt/nvidia/deepstream/deepstream-6.4/lib
ls -l  # shows the broken linkage because /usr/lib/aarch64-linux-gnu/tegra is broken or empty
Looking around in this Docker image, the /usr/lib/aarch64-linux-gnu/tegra folder seems to be missing or broken, as it shows up in red (the way broken symlinks do) in the ls output.
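For reference, a minimal sketch of how arm64 emulation can be enabled on the x86_64 host before running the container, assuming the binfmt handlers are not already registered (the tonistiigi/binfmt image is one common way to install them):
# Register QEMU binfmt handlers for arm64 on the x86_64 host
docker run --privileged --rm tonistiigi/binfmt --install arm64
# Verify that arm64 containers can now execute; this should print aarch64
docker run --rm --platform linux/arm64 ubuntu:22.04 uname -m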
A simple way to reproduce is to try to build the gst-dsexample plugin like so (this assumes you are inside the above container):
cd /opt/nvidia/deepstream/deepstream-6.4/sources/gst-plugins/gst-dsexample
# Install dependencies for plugin
apt-get update
apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libopencv-dev
export CUDA_VER=12.2
make -j
You will see that it fails to build because it can’t find libnvbufsurface and libnvbufsurftransform. Is it possible to cross-compile like this? Where does the /usr/lib/aarch64-linux-gnu/tegra folder come from exactly?
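For anyone else checking, a quick way to list the dangling symlinks inside the container is a sketch like the following (`find -xtype l` matches symlinks whose target does not exist):
# List broken symlinks under the DeepStream lib directory
find /opt/nvidia/deepstream/deepstream-6.4/lib -xtype l -exec ls -l {} \;
# Same check for the tegra directory, if it exists
find /usr/lib/aarch64-linux-gnu/tegra -xtype l -exec ls -l {} \; 2>/dev/null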
Interesting, thanks for the information. So where exactly should I copy them to? The same location that drivers.csv specifies, which is here?
sym, /usr/lib/aarch64-linux-gnu/nvidia/libnvbufsurface.so
sym, /usr/lib/aarch64-linux-gnu/nvidia/libnvbufsurftransform.so
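On the Jetson itself, the mounted libraries are listed in the CSV files used by the NVIDIA container runtime. A quick way to see the relevant entries is sketched below; the CSV path is what recent JetPack releases appear to use, so verify it on your device:
# On the Jetson host: show which nvbufsurface entries the container runtime mounts
grep -i nvbufsurf /etc/nvidia-container-runtime/host-files-for-container.d/drivers.csv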
Copy the shared libraries to the corresponding location in the Docker image, and then save the image.
For example, add the following entries to the Dockerfile, then build and export the custom image.
There were a lot of additional dependencies I had to pull in as well, because I was getting build errors when copying in only libnvbufsurftransform/libnvbufsurface. In my Dockerfile I had to add the following:
# Copy all required library files to both directories
COPY tegra/libnvbufsurface.so.1.0.0 /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvbufsurftransform.so.1.0.0 /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvrm_mem.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvrm_surface.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvrm_chip.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvos.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvbuf_fdmap.so.1.0.0 /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvrm_gpu.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvrm_host1x.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvvic.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvdla_compiler.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvcudla.so* /usr/lib/aarch64-linux-gnu/nvidia/
# Additional dependencies
COPY tegra/libnvsciipc.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvsocsys.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvrm_sync.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvtegrahv.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvrm_stream.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvcolorutil.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libcuda.so* /usr/lib/aarch64-linux-gnu/nvidia/
COPY tegra/libnvdla_runtime.so* /usr/lib/aarch64-linux-gnu/nvidia/
# Copy the same files to tegra directory
COPY tegra/libnvbufsurface.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvbufsurftransform.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvrm_mem.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvrm_surface.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvrm_chip.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvos.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvbuf_fdmap.so.1.0.0 /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvrm_gpu.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvrm_host1x.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvvic.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvdla_compiler.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvcudla.so* /usr/lib/aarch64-linux-gnu/tegra/
# Additional dependencies in tegra
COPY tegra/libnvsciipc.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvsocsys.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvrm_sync.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvtegrahv.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvrm_stream.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvcolorutil.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libcuda.so* /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/libnvdla_runtime.so* /usr/lib/aarch64-linux-gnu/tegra/
# Create symbolic links for all libraries in nvidia directory
RUN cd /usr/lib/aarch64-linux-gnu/nvidia && \
for f in *.so.*; do \
base=$(echo $f | sed 's/\([^.]*\).so.*/\1.so/'); \
ln -sf $f $base; \
done
# Create symbolic links for all libraries in tegra directory
RUN cd /usr/lib/aarch64-linux-gnu/tegra && \
for f in *.so.*; do \
base=$(echo $f | sed 's/\([^.]*\).so.*/\1.so/'); \
ln -sf $f $base; \
done
RUN ldconfig
Is there a better way than this? I got all of these from the Jetson device's /usr/lib/aarch64-linux-gnu/tegra/ folder.
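One possible simplification, instead of enumerating every library, would be to copy the whole tegra directory off a Jetson running the matching L4T release and COPY it in a single step. This is an untested sketch; the hostname jetson is illustrative and assumes SSH access to the device:
# On the build host: pull the tegra libraries from a Jetson on the matching L4T version
rsync -a jetson:/usr/lib/aarch64-linux-gnu/tegra/ ./tegra/
# In the Dockerfile: copy the whole directory instead of individual files
COPY tegra/ /usr/lib/aarch64-linux-gnu/tegra/
COPY tegra/ /usr/lib/aarch64-linux-gnu/nvidia/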
Running the Docker image I cross-compiled, I am seeing a couple of warnings that I don't see when building the image on the Jetson device:
xod-optical-deepstream-onvif | /bin/bash: line 1: lsmod: command not found
xod-optical-deepstream-onvif | /bin/bash: line 1: modprobe: command not found
Yeah, it looks like not having lsmod and modprobe causes gst-dsexample to fail at runtime, because it's unable to determine that the GPU is integrated rather than a dGPU. I am getting this runtime error:
xod-optical-deepstream-onvif | /dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4550: => Surface type not supported for transformation NVBUF_MEM_CUDA_PINNED
xod-optical-deepstream-onvif |
xod-optical-deepstream-onvif | 0:00:13.736806334 1 0xffff1c001d20 ERROR nvvideoconvert gstnvvideoconvert.c:4208:gst_nvvideoconvert_transform: buffer transform failed
I have narrowed it down to the gst-dsexample plugin (which we call pip, for Picture-in-Picture), because if I disable the plugin everything runs as expected. I think that because we are missing modprobe and lsmod it's not able to determine the GPU type here:
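As a hedged workaround sketch: on Ubuntu both lsmod and modprobe come from the kmod package, so it can be added to the image. Note this only addresses the missing binaries; it may not fix the NVBUF_MEM_CUDA_PINNED error by itself, since module queries inside a container may still not reflect the host:
# Provide lsmod/modprobe inside the container (Ubuntu ships them in the kmod package)
RUN apt-get update && apt-get install -y kmod && rm -rf /var/lib/apt/lists/*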
I guess you might be building this Docker image on an x86 platform?
In fact, when executing docker pull nvcr.io/nvidia/deepstream:xxx-triton-multiarch, different layers will be pulled from NGC depending on the platform.
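For reference, the arm64 layers can still be pulled explicitly on an x86_64 host, as sketched below. Note that even the arm64 layers do not contain the Tegra driver libraries, since on a Jetson those are typically mounted into the container from the host by the NVIDIA container runtime:
# Explicitly pull the arm64 variant of the multiarch image on an x86_64 host
docker pull --platform linux/arm64 nvcr.io/nvidia/deepstream:6.4-triton-multiarch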
On Jetson
Save the following content as cross.dockerfile.
FROM nvcr.io/nvidia/deepstream:7.1-triton-multiarch
RUN mkdir /usr/lib/aarch64-linux-gnu/nvidia/
COPY libnvbufsurface.so /usr/lib/aarch64-linux-gnu/nvidia/
COPY libnvbufsurface.so.1.0.0 /usr/lib/aarch64-linux-gnu/nvidia/
COPY libnvbufsurftransform.so /usr/lib/aarch64-linux-gnu/nvidia/
COPY libnvbufsurftransform.so.1.0.0 /usr/lib/aarch64-linux-gnu/nvidia/
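If you are building this on the x86_64 host, a possible invocation is sketched below, assuming docker buildx and the QEMU setup mentioned earlier are in place; the output tag is illustrative:
# Run from the directory containing cross.dockerfile and the copied .so files
docker buildx build --platform linux/arm64 -f cross.dockerfile -t deepstream-cross:7.1 .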
403.4 /usr/bin/ld: warning: libnvrm_mem.so, needed by /opt/nvidia/deepstream/deepstream-6.4/lib/libnvbufsurface.so, not found (try using -rpath or -rpath-link)
403.4 /usr/bin/ld: warning: libnvrm_surface.so, needed by /opt/nvidia/deepstream/deepstream-6.4/lib/libnvbufsurface.so, not found (try using -rpath or -rpath-link)
403.4 /usr/bin/ld: warning: libnvrm_chip.so, needed by /opt/nvidia/deepstream/deepstream-6.4/lib/libnvbufsurface.so, not found (try using -rpath or -rpath-link)
403.4 /usr/bin/ld: warning: libnvos.so, needed by /opt/nvidia/deepstream/deepstream-6.4/lib/libnvbufsurface.so, not found (try using -rpath or -rpath-link)
403.4 /usr/bin/ld: warning: libnvbuf_fdmap.so.1.0.0, needed by /opt/nvidia/deepstream/deepstream-6.4/lib/libnvbufsurface.so, not found (try using -rpath or -rpath-link)
403.4 /usr/bin/ld: warning: libnvrm_gpu.so, needed by /opt/nvidia/deepstream/deepstream-6.4/lib/libnvbufsurface.so, not found (try using -rpath or -rpath-link)
403.4 /usr/bin/ld: warning: libnvrm_host1x.so, needed by /opt/nvidia/deepstream/deepstream-6.4/lib/libnvbufsurftransform.so, not found (try using -rpath or -rpath-link)
403.4 /usr/bin/ld: warning: libnvvic.so, needed by /opt/nvidia/deepstream/deepstream-6.4/lib/libnvbufsurftransform.so, not found (try using -rpath or -rpath-link)
403.4 /usr/bin/ld: warning: libnvdla_compiler.so, needed by /lib/aarch64-linux-gnu/libnvinfer.so.8, not found (try using -rpath or -rpath-link)
403.5 /usr/bin/ld: warning: libnvcudla.so, needed by /usr/local/cuda/lib64/libcudla.so.1, not found (try using -rpath or -rpath-link)
Turns out it was the nvbuf-memory-type parameter not being set correctly to 0:
[pip]
enable=1
processing-width=640
processing-height=480
#batch-size for batch supported optimized plugin
#batch-size=1
unique-id=15
gpu-id=0
# Supported memory types are 1 and 3
nvbuf-memory-type=0
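To double-check what the numeric values map to on the target, the property enum can be inspected on the Jetson as sketched below (nvvideoconvert is just one element that exposes this property):
# On the Jetson: list the enum values accepted by nvbuf-memory-type
gst-inspect-1.0 nvvideoconvert | grep -A 8 "nvbuf-memory-type"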
However, one thing I am noticing is that I am getting runtime errors when a bounding box is drawn on the screen. It runs and then exits with code 139. I have made sure that the nvbuf-memory-type for each plugin is set to 0. Any ideas? Have you tested your cross-compiled gst-dsexample from above on an actual Jetson device?
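Exit code 139 usually means a segmentation fault (128 + SIGSEGV), so one way to narrow it down is to capture a backtrace under gdb inside the container. This is only a sketch; your-app and your-config.txt are placeholders for the actual binary and config file:
# Install gdb inside the container
apt-get update && apt-get install -y gdb
# Run the pipeline under gdb and print a backtrace when it crashes
gdb --batch -ex run -ex bt --args ./your-app your-config.txt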
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.