Nvstreammux handled EOS after dynamically adding a source


• Hardware Platform (Jetson / GPU): 3090
• DeepStream Version: 6.2
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only): 525.89.02
• Issue Type (questions, new requirements, bugs): questions

I referred to deepstream_test_rt_src_add_del.c to build a runtime source re-add application. The application is meant to delete and re-add the same single file source by destroying and re-constructing uri-decode-bin sources.

The deletion works, but after the new uri-decode-bin is re-added, the application only handles the first frame and shows

nvstreammux: Successfully handled EOS for source_id=0

and then it hangs, never sending a stream-eos message from nvstreammux to the bus.

The flow of uri-decode-bin is

uri-decode-bin->(source,decodebin0->(typefind,qtdemux0,multiqueue0,h264parse0,capsfilter0,nvv4l2decoder0))

The parentheses mark the beginning and end of each bin’s child elements.

The flow of the whole pipeline is

uri-decode-bin ! sink_0 nvstreammux ! nvinfer ! nvtracker ! nvstreamdemux src_0 ! nvvideoconvert ! nvdsosd ! fakesink

The following is the log; the outputs of the infer and tracker elements are omitted.
1.log (8.8 KB)

The deletion-and-reconnection logic is as follows: when GST_MESSAGE_EOS is received in bus_call, a reconnect function scheduled via g_idle_add is invoked. In that function, sink_0 of nvstreammux is released and the old uri-decode-bin source is removed; then a new uri-decode-bin with the same file URI is added, relinked to sink_0 of nvstreammux, and its state is set to GST_STATE_PLAYING.
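For clarity, the reconnect step described above can be sketched roughly as below. This is an untested illustration, not a verified fix: `g_pipeline`, `g_streammux`, and `g_source_bin` are assumed globals, and `create_uridecode_bin` is the helper from the runtime_source_add_delete sample.

```c
/* Sketch of the reconnect logic described above (hypothetical globals,
 * based on the runtime_source_add_delete sample; untested). */
static gboolean
reconnect_source (gpointer data)
{
  const gchar *uri = (const gchar *) data;

  /* Tear down the old source bin and release sink_0 of nvstreammux. */
  gst_element_set_state (g_source_bin, GST_STATE_NULL);
  GstPad *sinkpad = gst_element_get_static_pad (g_streammux, "sink_0");
  gst_element_release_request_pad (g_streammux, sinkpad);
  gst_object_unref (sinkpad);
  gst_bin_remove (GST_BIN (g_pipeline), g_source_bin);

  /* Build a fresh uri-decode-bin for the same file and relink it;
   * the pad-added callback links its src pad back to sink_0. */
  g_source_bin = create_uridecode_bin (0, uri);
  gst_bin_add (GST_BIN (g_pipeline), g_source_bin);
  gst_element_sync_state_with_parent (g_source_bin);

  return FALSE;  /* run once from g_idle_add */
}
```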

Can you provide sample code that reproduces the problem?
It is hard to tell where the problem lies just from the description above.

I have changed the deepstream_test_rt_src_add_del.c to reproduce the problem.
source:
deepstream_test_rt_src_add_del_m.txt (24.5 KB)
change the suffix from .txt to .c to build and run
video:


run command:

./deepstream-test-rt-src-add-del-m file:///workspace/code-update/deepstream_reference_apps/runtime_source_add_delete/sample_1080p_h265_cut_5s.mp4 0 filesink 1


I compared the code and I don’t understand why such a change was made.

This is the cause of the problem. Do you want to add sources dynamically? You can try nvmultiurisrcbin.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvmultiurisrcbin.html
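For reference, a minimal nvmultiurisrcbin launch line might look like the following. The property names are taken from the linked documentation page and the paths are placeholders; this is a sketch, not a verified command.

```shell
gst-launch-1.0 nvmultiurisrcbin \
    uri-list="file:///path/a.mp4,file:///path/b.mp4" \
    max-batch-size=2 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer.txt ! nvmultistreamtiler ! fakesink
```

Because nvmultiurisrcbin contains its own nvstreammux, its output is already batched, and sources can be added or removed at runtime through its REST API instead of rebuilding the pipeline by hand.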

Can you share your usage scenario?

Hi junshengy, thank you for your attention!

I want to figure out why that change causes this weird problem.

The usage scenario is that we want to feed many local videos into the pipeline. The pipeline runs BATCH_SIZE uri-decode-bins in parallel for real-time processing, based on our model’s capacity. When any of them finishes, we can switch that uri to another one.

nvmultiurisrcbin seems to be intended for HTTP requests and may be considered in the future.

An interesting observation: when I use nvurisrcbin instead of uri-decode-bin and set its rtsp-reconnect-interval property, and the next uri is an rtsp source, the pipeline automatically resumes after a period of time once the previous local video ends.

When switching source bins this way, the pipeline state becomes abnormal. This problem is related to the GStreamer pipeline implementation; it is not a DeepStream problem.

runtime_source_add_delete should be able to match your requirements.

I made a simple change to deepstream_test_rt_src_add_del.c and kept the timeout-add method. The run_forever mode works for my purpose, but it only works in DeepStream 7 and fails in DeepStream 6.2, where the second loop throws the error

ERROR from element qtdemux3: Internal data stream error.
Error details: qtdemux.c(6619): gst_qtdemux_loop (): /GstPipeline:dstest-pipeline/GstURIDecodeBin:source-bin-01/GstDecodeBin:decodebin3/GstQTDemux:qtdemux3:
streaming stopped, reason error (-5)

In the sample code, I use MAX_NUM_SOURCES=2. Does the run_forever mode require DeepStream 7?
The changed code
deepstream_test_rt_src_add_del_2.txt (24.6 KB)
run command, using the original video:

./deepstream-test-rt-src-add-del-2 file:///workspace/code-update/deepstream_reference_apps/runtime_source_add_delete/sample_1080p_h265.mp4 1 filesink 1

1. Both DS-6.2 and DS-7.0 support this feature.

2. Please do not modify this part. When a stream quits or reaches EOS, nvstreammux needs to delete the corresponding sink pad; modifying this will destroy the state of the pipeline.

I ran the original sample, and it works well with run_forever.

However, it often happens that the code forcefully deletes a source without waiting for that source’s EOS (the video isn’t over yet). I commented out these lines in delete_sources so that deletion is delayed until g_eos_list[source_id] = TRUE:

  // do
  // {
  //   source_id = rand() % MAX_NUM_SOURCES;
  // } while (!g_source_enabled[source_id]);
  // g_source_enabled[source_id] = FALSE;
  // g_print("Calling Stop %d \n", source_id);
  // stop_release_source(source_id);

  // if (g_num_sources == 0)
  // {
  //   if (g_run_forever == FALSE)
  //   {
  //     g_main_loop_quit(loop);
  //     g_print("All sources Stopped quitting\n");
  //   }
  //   else
  //   {
  //     g_timeout_add_seconds(5, add_sources, (gpointer)g_source_bin_list);
  //   }
  //   return FALSE;
  // }

After this change, I find that the last finished video never produces the Got EOS from stream message, and the timeout-driven deletion hangs. However, the message nvstreammux: Successfully handled EOS for source_id is produced for all sources.
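The gating I was aiming for can be sketched independently of GStreamer as below. `try_delete_source` is a hypothetical helper; the real code would call stop_release_source in place of the printf.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_NUM_SOURCES 2

static bool g_eos_list[MAX_NUM_SOURCES];
static bool g_source_enabled[MAX_NUM_SOURCES] = { true, true };

/* Hypothetical helper: release a source only once its EOS has been seen.
 * Returns 1 if the source was released, 0 if it is still running. */
static int
try_delete_source (int source_id)
{
  if (!g_source_enabled[source_id] || !g_eos_list[source_id])
    return 0;                         /* video not over yet: skip it */
  g_source_enabled[source_id] = false;
  printf ("Releasing source %d\n", source_id);
  /* real code: stop_release_source (source_id); */
  return 1;
}
```

The timeout callback would then loop over the sources and call this helper, instead of picking a random enabled source and stopping it unconditionally.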

Similar to: No EOS signal in the second video in runtime add or delete source

This is a feature of the sample, explained in the README.

At runtime after a timeout a source will be added periodically. All the components are reconfigured during addition/deletion
After reaching of MAX_NUM_SOURCES, each source is deleted periodically till single source is present in the pipeline

You need to modify add_sources / delete_sources according to your requirements, not just g_timeout_add_seconds.

if (g_num_sources == MAX_NUM_SOURCES) {
  /* We have reached MAX_NUM_SOURCES to be added, so now stop calling this
   * function and enable calling delete sources. */
  g_timeout_add_seconds (5, delete_sources, (gpointer) g_source_bin_list);
  return FALSE;
}

Receive EOS from a source with its stream id:

case GST_MESSAGE_ELEMENT:
    {
      if (gst_nvmessage_is_stream_eos (msg)) {
        guint stream_id;
        if (gst_nvmessage_parse_stream_eos (msg, &stream_id)) {
          g_print ("Got EOS from stream %d\n", stream_id);
          g_mutex_lock (&eos_lock);
          g_eos_list[stream_id] = TRUE;
          g_mutex_unlock (&eos_lock);
        }
      }
      break;
    }

I found the reason the message for the last finished video was never received. The key is the sink: when I change the sink type to fakesink in the original code, Got EOS from stream is reached as expected.