Nvstreammux stream error when pipelines are restarted

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU Tesla-T4
• DeepStream Version 5.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version The version bundled with DeepStream SDK 5.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 460.39
• Issue Type( questions, new requirements, bugs) bugs

Hello, I’m using Gst-Interpipe to dynamically switch streams when one of the sources goes down.
My pipelines look like this:
Source pipeline - uridecodebin → watchdog → interpipesink
Dummy pipeline - videotestsrc → interpipesink
Main pipeline - interpipesrc → nvstreammux → fakesink
I have a probe attached to the fakesink which fetches the FPS.
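For reference, the FPS probe can be sketched like this. This is a minimal pure-Python counter, not the exact code from my app; the pad-probe hookup in the trailing comment assumes the standard GStreamer Python bindings, and the element names are illustrative:

```python
import time

class FpsCounter:
    """Counts buffers and reports FPS once per reporting interval."""

    def __init__(self, interval=1.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock          # injectable clock, for testing
        self.count = 0
        self.start = clock()

    def tick(self):
        """Call once per buffer; returns the FPS when an interval
        has elapsed, otherwise None."""
        self.count += 1
        elapsed = self.clock() - self.start
        if elapsed >= self.interval:
            fps = self.count / elapsed
            self.count = 0
            self.start = self.clock()
            return fps
        return None

# Hookup in the main pipeline (illustrative, assumes gi/Gst available):
#   counter = FpsCounter()
#   def probe_cb(pad, info):
#       fps = counter.tick()
#       if fps is not None:
#           print(f"FPS: {fps:.1f}")
#       return Gst.PadProbeReturn.OK
#   fakesink.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, probe_cb)
```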

The watchdog element posts an error to the bus when no buffers are received. In the bus-call function, I change the interpipesrc's listen-to property so it listens to the interpipesink in the dummy pipeline. Meanwhile, I restart the source pipeline (set it to NULL and then PLAYING) and then make the interpipesrc in the main pipeline listen to the source pipeline again.
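The fallback-and-restart sequence looks roughly like this. The Gst module is passed in as a parameter purely so the sequence can be exercised without GStreamer installed, and the interpipesink names "dummy_sink" and "source_sink" are illustrative, not my real element names:

```python
def handle_watchdog_error(gst, interpipesrc, source_pipeline,
                          fallback_sink="dummy_sink", primary_sink="source_sink"):
    """Run from the bus callback when the watchdog posts an ERROR message."""
    # 1. Switch to the dummy feed so nvstreammux keeps receiving buffers.
    interpipesrc.set_property("listen-to", fallback_sink)
    # 2. Restart the source pipeline: NULL, then PLAYING.
    source_pipeline.set_state(gst.State.NULL)
    source_pipeline.set_state(gst.State.PLAYING)
    # 3. Point the main pipeline back at the restarted source.
    interpipesrc.set_property("listen-to", primary_sink)
```

One caveat: set_state returns asynchronously, so switching listen-to back immediately can hand the main pipeline buffers from a source pipeline that has not finished coming up; waiting on get_state (or on the bus) before step 3 may be safer.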
When I switch back to the source pipeline after restarting it, nvstreammux throws the following error:

    [E 210313 05:28:34 gst_utils:98] gst-stream-error-quark: Input buffer number of surfaces (-336860181) must be equal to mux->num_surfaces_per_frame (1)
            Set nvstreammux property num-surfaces-per-frame appropriately
         (1):- gstnvstreammux.c(354): gst_nvstreammux_chain (): /GstPipeline:main-pipeline/GstNvStreamMux:nvstreammux0 

I looked up https://forums.developer.nvidia.com/t/deepstream-4-0-nvmux-input-buffer-number-of-surfaces-336860181-must-be-equal-to-mux-num-surfaces-per-frame/78670, but they’re using nvarguscamera and I’m using uridecodebin.

The pipeline works as intended and prints the FPS otherwise.

These are the properties set on the elements. My input is an RTMP source.
interpipesrc - is-live: True, emit-signals: True
interpipesink - forward-eos: True, sync: True, forward-events: True
nvstreammux - batched-push-timeout: 80000000, live-source: 1, nvbuf-memory-type: int(pyds.NVBUF_MEM_CUDA_UNIFIED), batch-size: num_sources, width: 1024, height: 720
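In Python form, the property setup above amounts to something like the following. The helper function is my own illustration, not code from the app; the nvbuf memory type is passed in as a plain int so the sketch does not depend on pyds being installed (in the real app it is int(pyds.NVBUF_MEM_CUDA_UNIFIED)):

```python
def configure_elements(interpipesrc, interpipesink, streammux,
                       batch_size, nvbuf_mem_type):
    """Apply the properties listed above to each element via set_property."""
    props = {
        interpipesrc: {"is-live": True, "emit-signals": True},
        interpipesink: {"forward-eos": True, "sync": True,
                        "forward-events": True},
        streammux: {
            "batched-push-timeout": 80000000,
            "live-source": 1,
            "nvbuf-memory-type": nvbuf_mem_type,  # int(pyds.NVBUF_MEM_CUDA_UNIFIED)
            "batch-size": batch_size,             # num_sources in my app
            "width": 1024,
            "height": 720,
        },
    }
    for element, settings in props.items():
        for name, value in settings.items():
            element.set_property(name, value)
```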

We have a sample for removing/adding RTSP sources while the pipeline is in the PLAYING state. Please refer to deepstream_reference_apps/runtime_source_add_delete at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub


Sure, this was a problem with handling sources whose camera streams stop abruptly and don't send an EOS event. I'll try working around that and let you know if it works. Thank you for the quick reply.
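One way to sketch that workaround: push an EOS through the stalled source pipeline before tearing it down, so downstream elements (the interpipesink, and through it nvstreammux) see a clean end-of-stream rather than truncated buffers. As above, the Gst module is injected only to keep the sequence testable without GStreamer; this is an assumption about the fix, not a confirmed solution:

```python
def restart_source_cleanly(gst, source_pipeline):
    """Restart a stalled source pipeline, sending EOS first."""
    # Send EOS so sinks can finalize the stream before teardown.
    source_pipeline.send_event(gst.Event.new_eos())
    # In a real app, wait for the EOS message on the pipeline's bus here.
    # Then do the NULL -> PLAYING restart.
    source_pipeline.set_state(gst.State.NULL)
    source_pipeline.set_state(gst.State.PLAYING)
```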