Containerizing a DeepStream Python script with Docker

I have created a DeepStream pipeline using Python. Now I want to containerize it with Docker. I looked into different resources but couldn't find a good approach.
I am using a Jetson Box PC for this. Is there documentation that is easier to understand and execute?
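For context, the usual pattern on Jetson is to start from one of NVIDIA's DeepStream L4T images and copy the script into it. A minimal Dockerfile sketch, assuming the `nvcr.io/nvidia/deepstream:7.1-triton-multiarch` image that appears later in this thread, and a hypothetical `app.py` for the pipeline script (the Python-bindings install step reuses the `user_deepstream_python_apps_install.sh` script shipped in the DeepStream containers; check its `--help` for the arguments your release expects):

```dockerfile
# Sketch only: pick the base tag that matches your JetPack/DeepStream versions.
FROM nvcr.io/nvidia/deepstream:7.1-triton-multiarch

# Install the DeepStream Python bindings (pyds) using the helper script
# bundled in the container; arguments omitted here on purpose.
WORKDIR /opt/nvidia/deepstream/deepstream
RUN /opt/nvidia/deepstream/deepstream/user_deepstream_python_apps_install.sh

# Copy the pipeline script ("app.py" is a placeholder name).
COPY app.py /app/app.py
WORKDIR /app
CMD ["python3", "app.py"]
```

The container must still be started with `--runtime nvidia` on Jetson so the host's GPU libraries are mounted in.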

I don't know what a Jetson Box PC is. If you want to build your own Docker image, please refer to this project:

Thanks for the response. I followed the instructions in the given link and was able to build the image, but while running it I got the error below:

sudo docker run -it --rm --net=host --runtime nvidia --gpus all --name deepstream_container deepstream-l4t:7.1.0-triton-local
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running createContainer hook #1: exit status 1, stdout: , stderr: time="2024-12-17T14:17:28+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/cb761a7f5cf4cf6a42c3cfdd4d4af7945fc08b0f5f08ea68030a5db9ff81d8a9/merged/usr/lib/aarch64-linux-gnu/gbm/nvidia-drm_gbm.so to …/tegra/libnvidia-allocator.so"
time="2024-12-17T14:17:28+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/cb761a7f5cf4cf6a42c3cfdd4d4af7945fc08b0f5f08ea68030a5db9ff81d8a9/merged/usr/lib/aarch64-linux-gnu/nvidia/libnvidia-allocator.so to libnvidia-allocator.so.1"
time="2024-12-17T14:17:28+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/cb761a7f5cf4cf6a42c3cfdd4d4af7945fc08b0f5f08ea68030a5db9ff81d8a9/merged/usr/lib/aarch64-linux-gnu/gbm/tegra_gbm.so to …/tegra/libnvidia-allocator.so"

1. nvidia-container-toolkit must be installed first.

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/index.html

2. Are you starting Docker on x86 or Jetson? Use --runtime nvidia for Jetson and --gpus all for x86, respectively for the different platforms.

Refer to this documentation.
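As a sketch, the install on an apt-based Jetson system generally comes down to the following (the apt repository setup step from the linked docs is assumed to be done already; these commands need root and a real Docker daemon, so treat them as an outline, not a script to paste blindly):

```shell
# Install the NVIDIA Container Toolkit package
# (assumes the NVIDIA apt repository is already configured per the docs).
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Register the nvidia runtime with Docker and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Sanity check: "nvidia" should appear in the runtimes list.
docker info | grep -i runtimes
```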

I added the installation steps for nvidia-container-toolkit, but I am still getting the same error. And I am building and starting Docker on the Jetson Box PC.

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or which sample application, and the function description.)
• The pipeline being used

Please fill in the information above. I think this may be caused by JetPack not being installed correctly.
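On a Jetson, most of the requested details can be read off the device itself; a quick sketch (package name patterns may vary by release):

```shell
# L4T / JetPack release string
cat /etc/nv_tegra_release

# DeepStream, TensorRT, and container-toolkit package versions
dpkg -l | grep -E 'deepstream|tensorrt|nvidia-container-toolkit'

# Confirm Docker sees the nvidia runtime
docker info | grep -iA2 runtimes
```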

If you run the official Docker image, does it work normally?

docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-7.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream:7.1-triton-multiarch

I tried running the command for the official Docker image, and it gives the same error. I have not connected any display to the Jetson; I am accessing it over SSH, so I removed DISPLAY=$DISPLAY from the command.

If there is no display connected, it is OK to drop the DISPLAY parameter, but the container should still be created normally.
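In other words, for a headless SSH session the command from above reduces to (same image tag as earlier in the thread, X11 options removed):

```shell
docker run -it --rm --net=host --runtime nvidia \
  -w /opt/nvidia/deepstream/deepstream-7.1 \
  nvcr.io/nvidia/deepstream:7.1-triton-multiarch
```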

I think you need to re-flash JetPack.

https://docs.nvidia.com/sdk-manager/install-with-sdkm-jetson/index.html

Hardware -
NVIDIA Jetson Xavier NX - AVerMedia NX215 - JetPack 5.1

We have a dev kit too; the official Docker image runs there with no issues, but on the AVerMedia NX215B-based Jetson Xavier NX it gives the above error.

If the vendor cannot upgrade to JetPack 6.1, you may only be able to run the DS-6.2 image on this board.

You also need to build your own image based on the DS-6.2 branch.
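A rough outline of such a build, assuming the deepstream_dockers project referenced earlier in the thread (the repository URL, branch placeholder, and directory layout here are assumptions; the `triton` make target matches the Makefile reference that appears in the error log later in this thread, but confirm everything against the project's README):

```shell
# Clone the docker-build project and switch to its DS-6.2 branch
# (exact branch name per the project's README).
git clone https://github.com/NVIDIA-AI-IOT/deepstream_dockers.git
cd deepstream_dockers
git checkout <ds-6.2-branch>

# Build the Jetson Triton image ("jetson" directory is an assumed layout).
cd jetson
make triton
```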

I tried using DS 6.2, but I am still facing the same issue.
I have pasted the log below.

[+] Building 2.8s (23/23) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 6.01kB 0.0s
=> [internal] load metadata for nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel 2.5s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [ 1/18] FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel@sha256:8dc63c3ce42b233ad5de1d4ce6d7e59cd9b899b83c92ce50c832efba9e3 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 112B 0.0s
=> CACHED [ 2/18] RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends linux-li 0.0s
=> CACHED [ 3/18] RUN mkdir -p /opt/tritonclient/ && mkdir -p /tmp/temp_triton99 && mkdir -p /opt/proto && mkdir 0.0s
=> CACHED [ 4/18] RUN mkdir -p /lib/firmware 0.0s
=> CACHED [ 5/18] RUN apt-get update && apt-get install -y --no-install-recommends nvidia-vpi-dev && rm -rf /var/lib/ 0.0s
=> CACHED [ 6/18] RUN rm -f /usr/lib/aarch64-linux-gnu/libavresample* /usr/lib/aarch64-linux-gnu/libavutil* /usr/lib/aarc 0.0s
=> CACHED [ 7/18] RUN rm -f /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstaudioparsers.so 0.0s
=> CACHED [ 8/18] RUN rm -f /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstfaad.so 0.0s
=> CACHED [ 9/18] RUN rm -f /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstvoaacenc.so /usr/lib/aarch64-linux-gnu/gstream 0.0s
=> CACHED [10/18] RUN rm -f /usr/lib/aarch64-linux-gnu/libmpg123.so* /usr/lib/aarch64-linux-gnu/libvpx.so* /usr/lib/aarch64-l 0.0s
=> CACHED [11/18] RUN DEBIAN_FRONTEND=noninteractive apt-get purge -y gstreamer1.0-libav 0.0s
=> CACHED [12/18] WORKDIR /opt/nvidia/deepstream/deepstream 0.0s
=> CACHED [13/18] ADD user_additional_install_devel.sh /opt/ 0.0s
=> CACHED [14/18] ADD user_deepstream_python_apps_install.sh /opt/ 0.0s
=> CACHED [15/18] RUN ln -s /usr/src/tensorrt/bin/trtexec /usr/bin/trtexec 0.0s
=> CACHED [16/18] RUN ldconfig 0.0s
=> CACHED [17/18] RUN sed -i '$d' /etc/apt/sources.list 0.0s
=> CACHED [18/18] RUN rm -f /etc/apt/sources.list.d/cuda.list 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:7f4865bc7b7e05222d1126df94a2d76f65a0d8594deb1bb2f2ef9b270497b2f7 0.0s
=> => naming to docker.io/library/… 0.0s
Error response from daemon: No such container: ds_build_triton
Error response from daemon: No such container: ds_build_triton
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running createContainer hook #1: exit status 1, stdout: , stderr: time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/gbm/nvidia-drm_gbm.so to …/tegra/libnvidia-allocator.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvidia-allocator.so to libnvidia-allocator.so.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/gbm/tegra_gbm.so to …/tegra/libnvidia-allocator.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/gbm/tegra-udrm_gbm.so to …/tegra/libnvidia-allocator.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/libcuda.so to tegra/libcuda.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libcuda.so to libcuda.so.1.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/libgstreamer-1.0.so.0.1603.99999 to tegra/libnvgstreamer-1.0.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/libnvcucompat.so to tegra/libnvcucompat.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/libv4l2.so.0.0.999999 to tegra/libnvv4l2.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/libv4lconvert.so.0.0.999999 to tegra/libnvv4lconvert.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvargus.so to …/…/…/tegra/libv4l2_nvargus.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvcuvidvideocodec.so to …/…/…/tegra/libv4l2_nvcuvidvideocodec.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvideocodec.so to …/…/…/tegra/libv4l2_nvvideocodec.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/libvulkan.so.1.3.204 to tegra/libvulkan.so.1.3.204"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1 to libcuda.so.1.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libgstnvdsseimeta.so to libgstnvdsseimeta.so.1.0.0"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libgstreamer-1.0.so.0 to libnvgstreamer-1.0.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so to libnvbufsurface.so.1.0.0"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so to libnvbufsurftransform.so.1.0.0"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so to libnvbuf_utils.so.1.0.0"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvdsbufferpool.so to libnvdsbufferpool.so.1.0.0"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvidia-egl-gbm.so.1 to libnvidia-egl-gbm.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvidia-egl-wayland.so.1 to libnvidia-egl-wayland.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvidia-kms.so to libnvidia-kms.so.35.2.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvidia-nvvm.so.4 to libnvidia-nvvm.so.35.2.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.1 to libnvidia-ptxjitcompiler.so.35.2.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvidia-vksc-core.so to libnvidia-vksc-core.so.35.2.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvid_mapper.so to libnvid_mapper.so.1.0.0"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvscibuf.so to libnvscibuf.so.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvscicommon.so to libnvscicommon.so.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvscistream.so to libnvscistream.so.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libnvscisync.so to libnvscisync.so.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libv4l2.so.0 to libnvv4l2.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libv4lconvert.so.0 to libnvv4lconvert.so"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libvulkansc.so to libvulkansc.so.1"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libvulkansc.so.1 to libvulkansc.so.1.0.10"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/lib/aarch64-linux-gnu/tegra/libvulkan.so.1 to libvulkan.so.1.3.204"
time="2024-12-18T12:36:09+05:30" level=info msg="Symlinking /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/share/glvnd/egl_vendor.d/10_nvidia.json to …/…/…/lib/aarch64-linux-gnu/tegra-egl/nvidia.json"
time="2024-12-18T12:36:09+05:30" level=error msg="failed to create link […/…/…/lib/aarch64-linux-gnu/tegra-egl/nvidia.json /usr/share/glvnd/egl_vendor.d/10_nvidia.json]: failed to create symlink: failed to remove existing file: remove /var/lib/docker/overlay2/0833dc9ee754c6ee3b8b08b61ba6562323f146a0a4b4dae80a9b7cbeff39c7a2/merged/usr/share/glvnd/egl_vendor.d/10_nvidia.json: device or resource busy": unknown.
make: *** […/common/Makefile:55: triton] Error 125

That's strange; I don't know what happened. This problem seems to be related to the manufacturer.

First ensure that the official Docker image can run correctly, then try to build it yourself.

Can the following command line run normally on the AVerMedia NX215?

unset DISPLAY

docker system prune --volumes -a

docker run -it --rm --net=host --runtime nvidia -w /opt/nvidia/deepstream/deepstream nvcr.io/nvidia/deepstream:6.2-triton

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.