Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) both
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Questions
We’ve developed a Parallel Inference implementation based on your example. It seems to work well except for two use cases:
• Looping file sources - once one of the streams loops, the frame rate for that stream drops from 30 fps to ~1 fps. After all of the streams loop, the pipeline freezes.
• Dynamically adding and removing sources - we can’t get this to work with either the new or the old nvstreammux. I can start the pipeline with max-sources and remove sources successfully, but adding sources back always fails. If I try to start the pipeline with fewer than max-sources (batch-size), it fails to play. A simplified sketch of the add sequence we’re attempting follows.
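For reference, a minimal sketch of the add sequence, assuming a hypothetical create_uridecode_bin() helper that wraps a uridecodebin in a bin with a "src" ghost pad, and a pipeline that is already PLAYING (our real app repeats this per branch):

static void
add_source (GstElement * pipeline, GstElement * streammux,
    const gchar * uri, guint source_id)
{
  gchar pad_name[16];
  GstElement *source_bin;
  GstPad *srcpad, *sinkpad;

  /* create_uridecode_bin() is a hypothetical helper that builds a
   * uridecodebin-based source bin exposing a "src" ghost pad. */
  source_bin = create_uridecode_bin (uri, source_id);
  gst_bin_add (GST_BIN (pipeline), source_bin);

  /* Request the batcher's sink pad for this stream and link to it. */
  g_snprintf (pad_name, sizeof (pad_name), "sink_%u", source_id);
  sinkpad = gst_element_request_pad_simple (streammux, pad_name);
  srcpad = gst_element_get_static_pad (source_bin, "src");
  gst_pad_link (srcpad, sinkpad);
  gst_object_unref (srcpad);
  gst_object_unref (sinkpad);

  /* Bring the new bin up to the pipeline's state. */
  gst_element_sync_state_with_parent (source_bin);
}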
My question is: has the Metamuxer been tested under these scenarios? Are these use cases known to be supported?
I’m trying to determine whether I need to keep debugging our implementation or should spend the time updating your example to see if it works there.
It seems you have customized the app and added some new features.
How did you implement the “Looping file sources” feature?
There are several nvstreammux and nvstreamdemux elements in the pipeline. Have you adapted all of the nvstreammux and nvstreamdemux elements for your dynamic source removing and adding case?
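In other words, removing stream N means releasing that stream’s request pads on every nvstreammux and nvstreamdemux it passes through, not only the first mux. A rough sketch for one mux/demux pair, following the runtime_source_add_delete reference pattern (the element arguments are assumptions):

static void
release_stream_pads (GstElement * streammux, GstElement * streamdemux,
    guint source_id)
{
  gchar name[16];
  GstPad *pad;

  /* Release the batcher's sink pad for this stream. */
  g_snprintf (name, sizeof (name), "sink_%u", source_id);
  pad = gst_element_get_static_pad (streammux, name);
  if (pad) {
    gst_pad_send_event (pad, gst_event_new_flush_stop (FALSE));
    gst_element_release_request_pad (streammux, pad);
    gst_object_unref (pad);
  }

  /* Release the demuxer's per-stream src pad as well. */
  g_snprintf (name, sizeof (name), "src_%u", source_id);
  pad = gst_element_get_static_pad (streamdemux, name);
  if (pad) {
    gst_element_release_request_pad (streamdemux, pad);
    gst_object_unref (pad);
  }
}

In the parallel pipeline this would have to be repeated for every branch’s mux/demux pair before the source bin itself is removed.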
Our implementation is pretty much identical to the uri-decode-source in deepstream_source_bin.c. I’m referring to the restart_stream_buf_prob probe:
/**
 * Probe function to drop certain events to support custom
 * logic of looping of each source stream.
 */
static GstPadProbeReturn
restart_stream_buf_prob (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstEvent *event = GST_EVENT (info->data);
  NvDsSrcBin *bin = (NvDsSrcBin *) u_data;

  if ((info->type & GST_PAD_PROBE_TYPE_BUFFER)) {
    /* Offset buffer timestamps by the base accumulated over previous
     * loops so the stream's running time keeps increasing. */
    GST_BUFFER_PTS (GST_BUFFER (info->data)) += bin->prev_accumulated_base;
  }

  if ((info->type & GST_PAD_PROBE_TYPE_EVENT_BOTH)) {
    if (GST_EVENT_TYPE (event) == GST_EVENT_EOS) {
      /* Schedule a flushing seek back to the start of the file. */
      g_timeout_add (1, seek_decode, bin);
    }
    if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT) {
      /* Track the accumulated segment base for the next loop. */
      GstSegment *segment;

      gst_event_parse_segment (event, (const GstSegment **) &segment);
      segment->base = bin->accumulated_base;
      bin->prev_accumulated_base = bin->accumulated_base;
      bin->accumulated_base += segment->stop;
    }
    switch (GST_EVENT_TYPE (event)) {
      case GST_EVENT_EOS:
        /* QOS events from downstream sink elements cause decoder to drop
         * frames after looping the file since the timestamps reset to 0.
         * We should drop the QOS events since we have custom logic for
         * looping individual sources. */
      case GST_EVENT_QOS:
      case GST_EVENT_SEGMENT:
      case GST_EVENT_FLUSH_START:
      case GST_EVENT_FLUSH_STOP:
        return GST_PAD_PROBE_DROP;
      default:
        break;
    }
  }
  return GST_PAD_PROBE_OK;
}
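For completeness, the seek_decode callback it schedules is essentially the stock one from deepstream_source_bin.c, which pauses the source bin, flush-seeks it back to 0, and resumes it; roughly:

static gboolean
seek_decode (gpointer data)
{
  NvDsSrcBin *bin = (NvDsSrcBin *) data;
  gboolean ret = TRUE;

  /* Pause, flush-seek to the start of the file, then resume. */
  gst_element_set_state (bin->bin, GST_STATE_PAUSED);

  ret = gst_element_seek (bin->bin, 1.0, GST_FORMAT_TIME,
      (GstSeekFlags) (GST_SEEK_FLAG_KEY_UNIT | GST_SEEK_FLAG_FLUSH),
      GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);

  if (!ret)
    GST_WARNING ("Error in seeking pipeline");

  gst_element_set_state (bin->bin, GST_STATE_PLAYING);

  /* Returning FALSE removes the timeout after one invocation. */
  return FALSE;
}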
I believe so, at least for the basic case where all parallel branches link to all streams. Are there any plans to update the Parallel Inference example to support dynamic updates?
If the parallel pipeline supported adding and removing sources dynamically, it would only need to load the same YOLO model once. If it is not supported, I have to use multiple pipelines to dynamically run more than one source on the same YOLO model. Then every pipeline has to load the same YOLO model, which I think is a waste of resources.
No. I mean you can use separate pipelines instead of the parallel pipeline. Then you can add/remove sources within each separate pipeline. The parallel pipeline does not support dynamically adding/removing sources.
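As a minimal sketch of that separate-pipelines approach (the launch string and yolo_config.txt path are illustrative assumptions; note that each pipeline loads its own copy of the model, which is the resource cost mentioned above):

#include <gst/gst.h>

/* Build and start one independent pipeline for a single source. */
static GstElement *
launch_single_source_pipeline (const gchar * uri)
{
  gchar *desc = g_strdup_printf (
      "uridecodebin uri=%s ! m.sink_0 "
      "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
      "nvinfer config-file-path=yolo_config.txt ! fakesink", uri);
  GstElement *pipeline = gst_parse_launch (desc, NULL);

  g_free (desc);
  if (pipeline)
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
  return pipeline;
}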
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.