DeepStream 6.4 - Gst-nvstreammux Plugin Hangs Indefinitely

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU (dGPU, RTX 4090)
• DeepStream Version
6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
Driver Version: 545.29.06
• Issue Type( questions, new requirements, bugs)
bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

docker run --gpus all -it --name  deepstream_6.4 --net host  --ipc=host   -v /storage/deepstream/apps/:/apps nvcr.io/nvidia/deepstream:6.4-triton-multiarch

cd /opt/nvidia/deepstream/deepstream/
./user_deepstream_python_apps_install.sh --build-bindings

git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git

cd /apps/deepstream_python_apps/apps/deepstream-nvdsanalytics/

python3 deepstream_nvdsanalytics.py /opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.h264

Output:

(python3:2953): GStreamer-WARNING **: 17:05:50.083: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though.

(python3:2953): GStreamer-WARNING **: 17:05:50.085: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.1: cannot open shared object file: No such file or directory
Creating Pipeline

Creating streamux

Output with GST_DEBUG=5

output_debug.txt (4.1 MB)

The Gst-nvstreammux plugin hangs indefinitely when used with the Python bindings. It hangs on the call below:

streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")

For this sample, you need to pass a URI, so the correct argument should be

file:///opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.h264
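
For example, the command from the first post would then become (the same sample stream, just passed as a file:// URI):

python3 deepstream_nvdsanalytics.py file:///opt/nvidia/deepstream/deepstream-6.4/samples/streams/sample_720p.h264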

In addition, I am not sure whether it is related to the driver version. The recommended driver version for DS-6.4 is R535.104.12.

I ran your steps on a server with driver version 535 and there were no problems.

I downgraded the driver version to R535 and modified the code to facilitate debugging, but the issue persists. The process freezes when calling Gst.ElementFactory.make("nvstreammux", "Stream-muxer").
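
For reference, one way to confirm exactly where such a hang occurs is a sketch along these lines (illustrative only, not what was used here; it relies on Python's standard faulthandler module and an arbitrary 30-second timeout):

import faulthandler

# If we are still blocked after 30 seconds, dump the Python stack of all
# threads to stderr and exit, so the hanging call shows up in the traceback.
faulthandler.dump_traceback_later(30, exit=True)

import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")

# Only reached if the element was created without hanging
faulthandler.cancel_dump_traceback_later()
print("nvstreammux created:", streammux)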

The same code/procedure (pyds 1.1.8) on deepstream 6.3 on this host works fine.

I started a new container:

docker run --gpus all \
                  -it \
                  --name  deepstream_test_ds6.4 \
                  --net host   \
                  nvcr.io/nvidia/deepstream:6.4-triton-multiarch

Output when entering the container:

===============================
   DeepStreamSDK 6.4.0
===============================

*** LICENSE AGREEMENT ***
By using this software you agree to fully comply with the terms and conditions
of the License Agreement. The License Agreement is located at
/opt/nvidia/deepstream/deepstream/LicenseAgreement.pdf. If you do not agree
to the terms and conditions of the License Agreement do not use the software.


=============================
== Triton Inference Server ==
=============================

NVIDIA Release 23.08 (build 66820947)
Triton Server Version 2.37.0

Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

root@omnidev:/opt/nvidia/deepstream/deepstream-6.4#

nvidia-smi output:

Fri Jan  5 13:36:24 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.146.02             Driver Version: 535.146.02   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090        Off | 00000000:09:00.0 Off |                  Off |
| 36%   28C    P8              35W / 450W |   1365MiB / 24564MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+

Installation of the pyds Python bindings

root@omnidev:/opt/nvidia/deepstream/deepstream-6.4# ./user_deepstream_python_apps_install.sh --build-bindings
####################################
Downloading necessary pre-requisites
####################################
Get:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64  InRelease [1581 B]
Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease
Get:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Get:4 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:5 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64  Packages [643 kB]
.
.
.
  Preparing metadata (setup.py) ... done
Requirement already satisfied: PyGObject in /usr/lib/python3/dist-packages (from pyds==1.1.10) (3.42.1)
Collecting pycairo>=1.16.0 (from PyGObject->pyds==1.1.10)
  Downloading pycairo-1.25.1.tar.gz (347 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 347.1/347.1 kB 1.1 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: pgi, pycairo
  Building wheel for pgi (setup.py) ... done
  Created wheel for pgi: filename=pgi-0.0.11.2-py3-none-any.whl size=181777 sha256=b1b64617d9573874898e6f771df59451235007a3e0bf84cd8e21786b14ea1c2e
  Stored in directory: /root/.cache/pip/wheels/fc/c3/1b/a1f2776e8cf1a8a190322b87dfd9d4153fd3d78c899d58515d
  Building wheel for pycairo (pyproject.toml) ... done
  Created wheel for pycairo: filename=pycairo-1.25.1-cp310-cp310-linux_x86_64.whl size=320972 sha256=531678177ca3e150ecb21c7266449c4a069c58dbef0352a0b03ca844b39048cf
  Stored in directory: /root/.cache/pip/wheels/d6/d8/c4/9bb1adbc349a349ed4718627f0afffeae26d9982060568cd30
Successfully built pgi pycairo
Installing collected packages: pgi, pycairo, pyds
Successfully installed pgi-0.0.11.2 pycairo-1.25.1 pyds-1.1.10
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

[notice] A new release of pip is available: 23.3.1 -> 23.3.2
[notice] To update, run: python3 -m pip install --upgrade pip



The code (test.py):

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst

import pyds

# Standard GStreamer initialization
Gst.init(None)

# Create GStreamer elements
# Create Pipeline element that will form a connection of other elements
print("Creating Pipeline \n ")
pipeline = Gst.Pipeline()

if not pipeline:
    sys.stderr.write(" Unable to create Pipeline \n")
print("Creating streamux \n ")

# Create nvstreammux instance to form batches from one or more sources.
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
if not streammux:
    sys.stderr.write(" Unable to create NvStreamMux \n")

pipeline.add(streammux)

Output:

 python3 test.py

(python3:2651): GStreamer-WARNING **: 13:50:42.846: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though.

(python3:2651): GStreamer-WARNING **: 13:50:44.273: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.1: cannot open shared object file: No such file or directory
Creating Pipeline

Creating streamux

I did not find R535.104.12.

Which GPU model are you using? I can't reproduce the problem using a T4 or an L4.

Try the following script.

/opt/nvidia/deepstream/deepstream/install.sh

I’m using an RTX 4090. The issue only occurs on the RTX 4090 with DS 6.4. Same test on same host with RTX 4090 and DS 6.3 works.

I have tested it (DS 6.4) on L4, and it works fine.

<snippet>
Installing collected packages: pgi, pycairo, pyds
Successfully installed pgi-0.0.11.2 pycairo-1.25.1 pyds-1.1.10
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

[notice] A new release of pip is available: 23.3.1 -> 23.3.2
[notice] To update, run: python3 -m pip install --upgrade pip
root@prod:/opt/nvidia/deepstream/deepstream-6.4# vi test.py
root@prod:/opt/nvidia/deepstream/deepstream-6.4# python3 test.py

(gst-plugin-scanner:2704): GStreamer-WARNING **: 15:41:44.187: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.1: cannot open shared object file: No such file or directory
Creating Pipeline

Creating streamux



root@prod:/opt/nvidia/deepstream/deepstream-6.4# nvidia-smi
Mon Jan  8 15:41:47 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA L4                      On  | 00000000:17:00.0 Off |                    0 |
| N/A   64C    P0              70W /  72W |   6127MiB / 23034MiB |     79%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA L4                      On  | 00000000:4B:00.0 Off |                    0 |
| N/A   64C    P0              67W /  72W |   4783MiB / 23034MiB |     73%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA L4                      On  | 00000000:98:00.0 Off |                    0 |
| N/A   61C    P0              67W /  72W |   5212MiB / 23034MiB |     68%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+

I always build Docker images using this procedure: GitHub - NVIDIA-AI-IOT/deepstream_dockers (a project demonstrating how to make DeepStream docker images), which was actually my initial test. I also tried installing manually and using the pre-built Docker images from NGC, but both run into the same issue: the plugin freezes. If you can test it on an RTX 4090, you will face the same problem.

I have the exact same problem on RTX 2080 in DS 6.4, pyds 1.1.10, driver version 535.146.02. My observations:

  • It’s not specific to any element; it’s the first Gst.ElementFactory.make of ANY element that hangs (see the sketch below).
  • Removing import pyds makes the problem go away.

So maybe it’s a problem with the recent pyds release.
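
A quick sketch of how the first observation can be checked (illustrative only; it uses a stock GStreamer element rather than a DeepStream one, so nvstreammux itself is out of the picture):

# With "import pyds" present first, the very first Gst.ElementFactory.make()
# is expected to hang, even for a plain GStreamer element such as "fakesink".
import pyds   # commenting this line out should make the hang disappear
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
elem = Gst.ElementFactory.make("fakesink", "sink")
print("created:", elem)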

This thread with what seems to be the same problem has two temporary workarounds that work for me: Deepstream 6.4 problem with gst registry cache

I tested on a 2080 and a 2080 Ti and there was no problem. This shouldn't be the same issue.

I’ll try to find a 4090.
Can you try updating nvidia-container-toolkit?
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
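
For reference, on the host the toolkit can usually be brought to the latest version with something like the following (assuming the NVIDIA apt repository from the guide above is already configured):

sudo apt-get update && sudo apt-get install --only-upgrade nvidia-container-toolkit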

nvidia-container-toolkit is already the newest version (1.13.5-1).

I don’t know why. There is currently a shortage of 4090.

I have tried it on 2080/2080ti/3090 and have not encountered any problems.

What OS is your host running? Could it be reinstalled with Ubuntu 22.04?

Yes, you are right: removing import pyds makes the problem go away.
So the issue is related to pyds (1.1.10).

I did some tests:

This code works:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib,GstRtsp
Gst.init(None)
pipeline = Gst.Pipeline()
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")

This code gets stuck on streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer"):

import pyds
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib,GstRtsp
Gst.init(None)
pipeline = Gst.Pipeline()
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")

This code also works, importing pyds after the streammux creation:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib,GstRtsp
Gst.init(None)
pipeline = Gst.Pipeline()
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
import pyds

RTX 4090 host - Ubuntu 20.04.6 LTS (fails)
NVIDIA L4 host - Ubuntu 22.04.3 LTS (works)

It's not possible to upgrade the RTX 4090 host from Ubuntu 20.04.6 LTS to 22.04 right now.

The issue is related to pyds, as mentioned in the previous post.

I successfully upgraded Ubuntu from 20.04 LTS to 22.04 LTS, and the issue has disappeared. It appears that the pyds library (DeepStream SDK Python bindings) has an incompatibility when the host is running Ubuntu 20.04 LTS and the Docker image is based on Ubuntu 22.04.

When using Ubuntu 20.04 on the host and a Docker image with Ubuntu 22.04:

Issue observed: importing the pyds library before creating the plugin elements causes the process to freeze. Additionally, the following message is displayed:

(python3:2651): GStreamer-WARNING **: 13:50:42.846: External plugin loader failed. 
This most likely means that the plugin loader helper binary was not found or could not be run. 
You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. 
This should normally not be required though.

On the other hand, when using Ubuntu 22.04 both on the host and in the Docker image, the plugins work normally, and the aforementioned warning message disappears.

It seems that the compatibility issue is specifically related to the combination of Ubuntu 20.04 on the host and a Docker image based on Ubuntu 22.04. When both the host and Docker image are on Ubuntu 22.04, everything functions as expected.

In fact, I tried it on Ubuntu 18.04 (L4 + DS-6.4 docker) and Ubuntu 20.04 (2080 + DS-6.4 docker) and did not encounter any problems.

It's unfortunate that you couldn't replicate the issue.
We would need to find a comparable setup for the problem to occur. Since I've upgraded Ubuntu, I'm no longer able to run tests to reproduce it. It seems to be a very specific issue.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.