Multiple instances of nvdsosd interfere with each other

Hello everyone,

I’m trying to run multiple models on the same source and produce a different output for each. However, using multiple instances of nvdsosd with different settings causes them to interfere with each other.

Below is a minimal example that reproduces the bug.
I tried placing the tee at different points in the pipeline, but the result is always the same.

I would be grateful for any help.


• Hardware Platform (Jetson / GPU)
GPU: NVIDIA GeForce GTX 1650
• Docker Image
deepstream:6.3-gc-triton-devel
• NVIDIA GPU Driver Version (valid for GPU only)
545.29.02
• Issue Type (questions, new requirements, bugs)
Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file content, the command line used, and other details for reproducing.)

gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! mux.sink_0 \
 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! tee name=src_tee \
 src_tee. ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt \
 ! nvvideoconvert  ! nvdsosd display-text=1 display-clock=1 display-bbox=0 ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=out0.mp4 \
 src_tee. ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt \
 ! nvvideoconvert  ! nvdsosd display-text=0 display-clock=0 display-bbox=1 ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=out1.mp4

Hi @janhuennemeyer,

In your pipeline, both branches share a single set of DeepStream metadata: tee does not copy buffers, so both paths operate on the same NvDsBatchMeta. To avoid race conditions over the metadata, add a separate nvstreammux to each path; in other words, the tee element should be placed before nvstreammux.
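
In skeleton form, the change would look something like this (a minimal sketch: fakesink stands in for your inference and encode branches, and a queue is added after each tee branch, which is generally good practice in GStreamer):

gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! tee name=src_tee \
 src_tee. ! queue ! mux_0.sink_0 \
 src_tee. ! queue ! mux_1.sink_0 \
 nvstreammux name=mux_0 batch-size=1 width=1920 height=1080 ! fakesink \
 nvstreammux name=mux_1 batch-size=1 width=1920 height=1080 ! fakesink

Each nvstreammux creates its own batched buffer with its own NvDsBatchMeta, so the two branches no longer race over shared metadata.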

Do you want parallel inference? There is a sample provided by DeepStream for that.

Thank you @miguel.taylor! I had tried putting the tee everywhere but there.

Here is what the solution looks like for my minimal example (each branch now has its own nvstreammux, so each path gets its own metadata):

gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! tee name=src_tee \
 src_tee.src_0 ! mux_0.sink_0 \
 src_tee.src_1 ! mux_1.sink_0 \
 nvstreammux name=mux_0 batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt \
 ! nvvideoconvert ! nvdsosd process-mode=1 display-text=1 display-clock=1 display-bbox=0 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=out0.mp4 \
 nvstreammux name=mux_1 batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt \
 ! nvvideoconvert ! nvdsosd process-mode=1 display-text=0 display-clock=0 display-bbox=1 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=out1.mp4

@junshengy Yes, I want parallel inference, but it’s actually two separate pipelines with the same source. There is no need to combine the results at the end; I just didn’t want to decode the source twice.
