Hi all.
In the sample https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/runtime_source_add_delete, you show a method to add sources dynamically with nvstreammux and nvmultistreamtiler.
Now we want to implement a similar function with nvstreammux and nvstreamdemux. When we try to get the pad of nvstreamdemux in the function "static gboolean add_sources (gpointer data)", we get an error message: "Failed to get the request pad. Exiting".
gchar pad_name[16] = { 0 };
GstPad *sinkpad = NULL;

g_snprintf (pad_name, 15, "src_%u", source_id);
sinkpad = gst_element_get_request_pad (nvstreamdemux, pad_name);
if (sinkpad == NULL) {
  g_printerr ("Failed to get the request pad. Exiting\n");
  return FALSE;  /* add_sources () is a gboolean GSourceFunc, so return FALSE, not void */
}
gst_object_unref (sinkpad);
0:00:10.618554432 9287 0x5599ae9f30 WARN nvstreamdemux gstnvstreamdemux.c:69:gst_nvstreamdemux_request_new_pad:<nvstreamdemux> New pad can only be requested in NULL state
Failed to get the request pad. Exiting
Does that mean we can't add sinks to nvstreamdemux dynamically?
Maybe I am misinterpreting your question @2251582984, but we can dynamically add elements to the streamdemux element. I do it in my programs and so does Nvidia in the deepstream-app sample application.
Hi, jasonpgf2a.
When I create a pipeline like "uridecodebin uri=file:///input_1.mp4 ! nvstreammux ! nvinfer ! nvstreamdemux ! queue ! nvvideoconvert ! nvdsosd ! … ! filesink location=output1.mp4" and then set it to playing with gst_element_set_state (pipeline, GST_STATE_PLAYING), there is no problem.
But when the pipeline is playing and I want to add the source bin "uridecodebin uri=file:///input_2.mp4" and the sink bin "queue ! nvvideoconvert ! nvdsosd ! … ! filesink location=output2.mp4" to the pipeline, I get the error message:
gchar pad_name[16] = { 0 };
GstPad *sinkpad = NULL;

g_snprintf (pad_name, 15, "src_%u", source_id);
sinkpad = gst_element_get_request_pad (nvstreamdemux, pad_name);
if (sinkpad == NULL) {
  g_printerr ("Failed to get the request pad. Exiting\n");
  return FALSE;  /* the function returns gboolean, so return FALSE, not void */
}
gst_object_unref (sinkpad);
0:00:10.618554432 9287 0x5599ae9f30 WARN nvstreamdemux gstnvstreamdemux.c:69:gst_nvstreamdemux_request_new_pad:<nvstreamdemux> New pad can only be requested in NULL state
Failed to get the request pad. Exiting
By the way, if I create a pipeline like "uridecodebin uri=file:///input_1.mp4 ! nvstreammux ! nvinfer ! nvmultistreamtiler ! queue ! nvvideoconvert ! nvdsosd ! … ! filesink location=output1.mp4", set it to playing with gst_element_set_state (pipeline, GST_STATE_PLAYING), and then add the source bin "uridecodebin uri=file:///input_2.mp4" to the pipeline, there is no problem.
Hmm… could you restart the program when you need to add another source? That's what I'm doing anyway, if I need to add a source after playing has started.
I then have a tee with a fakesink as my sink bin. On a pgie detection (i.e. person found) I then add a new set of elements to the tee, which does allow you to dynamically add/remove src pads at runtime.
It's a shame that the nvstreamdemux doco does not explicitly state that it cannot handle adding/removing sources in the PLAYING state.
I found a pretty simple workaround to this limitation!
Since the nvstreamdemux element only allows pads to be added in the NULL state, you can add as many pads as you need before starting the pipeline. Then while the pipeline is playing, you can link and unlink these existing pads instead of adding and removing them.
Here is this idea, in snippets of code:
Decide on the maximum number of outputs you expect to need out of nvstreamdemux:
#define MAX_NUM_OUTPUTS 8
Add pads to the nvstreamdemux element when constructing the original pipeline:
// make the demux element
demux = gst_element_factory_make ("nvstreamdemux", "stream-demuxer");

// at some point BEFORE PLAYING, add all the output pads to demux
for (guint i = 0; i < MAX_NUM_OUTPUTS; i++)
{
  gchar demux_src_pad_name[8];
  g_snprintf (demux_src_pad_name, 7, "src_%u", i);
  GstPad *demux_src_pad = gst_element_get_request_pad (demux, demux_src_pad_name);
  gst_object_unref (demux_src_pad);
}
Any time you need another output from your nvstreamdemux element, you will need to link the demux output (“source”) pad to the receiving element’s input (“sink”) pad:
// first prepare your sink element
// in my pipeline, after demux, I have a bin with:
// queue | nvvideoconvert | nvdsosd
GstElement *sink_element = your_sink_element;
// decide which output stream you want
guint index = 3; // suppose you want the fourth output stream
// get the sink pad from sink_element
// in my code demux links to a queue
// since queue takes one input, it has a static pad simply named "sink"
GstPad *sink_pad = gst_element_get_static_pad (sink_element, "sink");
// get the source pad from demux
gchar demux_src_pad_name[8];
g_snprintf (demux_src_pad_name, 7, "src_%u", index);
GstPad *demux_src_pad = gst_element_get_static_pad (demux, demux_src_pad_name);
// link the pads
if (gst_pad_link (demux_src_pad, sink_pad) != GST_PAD_LINK_OK) {
  g_printerr ("Failed to link sink_element to demux!\n");
  // handle errors in any way you see fit (maybe exit)
}
// don't forget to unref your pad pointers
gst_object_unref (demux_src_pad);
gst_object_unref (sink_pad);
Any time you need to remove an output from your nvstreamdemux element, you will need to unlink the demux source pad from the receiving element’s sink pad:
// stopping sink_element by changing the state is normal and is not part of the workaround
// after you have stopped the source that feeds into the nvstreammux
// now you can stop the sink that is fed out of nvstreamdemux, using the same method:
GstStateChangeReturn state_change_return =
    gst_element_set_state (sink_element, GST_STATE_NULL);
switch (state_change_return) {
  case GST_STATE_CHANGE_ASYNC:
    // wait for the state change to complete
    gst_element_get_state (sink_element, NULL, NULL, GST_CLOCK_TIME_NONE);
    /* fall through */
  case GST_STATE_CHANGE_SUCCESS: {
    // DO NOT DO THIS (do not remove demux's source pad):
    // gst_element_release_request_pad (linkedElement, linkedPad);

    // WORKAROUND:
    // instead leave demux's source pad and unlink it from sink_element's sink pad
    // first get the demux source pad using its index
    gchar demux_src_pad_name[8];
    g_snprintf (demux_src_pad_name, 7, "src_%u", index);
    GstPad *demux_src_pad = gst_element_get_static_pad (demux, demux_src_pad_name);
    // then get sink_element's sink pad
    // here this is done by getting the peer of demux's source pad
    GstPad *sink_pad = gst_pad_get_peer (demux_src_pad);
    // now unlink the pads
    gst_pad_unlink (demux_src_pad, sink_pad);
    // and don't forget to unref the pads
    gst_object_unref (sink_pad);
    gst_object_unref (demux_src_pad);
    // END OF WORKAROUND

    // remove sink_element from the pipeline
    // note: gst_bin_remove () drops the bin's reference to sink_element,
    // so only call gst_object_unref () on it if you hold an extra reference
    gst_bin_remove (GST_BIN (pipeline), sink_element);
    break;
  }
  case GST_STATE_CHANGE_FAILURE:
    g_printerr ("STATE CHANGE FAILURE\n");
    break;
  case GST_STATE_CHANGE_NO_PREROLL:
    g_print ("STATE CHANGE NO PREROLL\n");
    break;
  default:
    break;
}
I hope anyone who needs to dynamically change the outputs from nvstreamdemux finds this workaround code usable. Feel free to use this code for your purposes.
If anyone (Nvidia people?) sees an error in this code, or has any other information about this (feature addition timeline?), please let me know.
Credit where credit is due: this solution to the problem of dynamically changing feeds with demux is laid out by another user in this comment on another post:
Hi everyone,
Thanks for above-suggested methods, helped a lot.
I have two more cases where I am getting stuck.
1st Case:
After the demux element I want to mux the first 5 streams together and have pipeline1 for them, and mux the later streams and have them follow pipeline2. So for example:
                                              → mux1 → pipeline1
src → nvstreammux → nvinfer → nvstreamdemux
                                              → mux2 → pipeline2
In my case, pipeline1 is: nvtiler → nvvidconvert → nvosd → video-render
and pipeline2 is: nvtiler → nvvidconvert → nvosd → nvvidconvert → videoconvert → x264enc → rtph264pay → udpsink
But when I run the pipeline I get a segmentation fault (core dumped).
2nd Case:
When I don't use the mux after the demux element but instead have different pipelines for different streams, for example for stream1 after demux I have: → queue → nvvidconvert → nvosd → nvvidconvert → videoconvert → x264enc → rtph264pay → udpsink
and for stream2 I have: → queue → fakesink
In this case, the pipeline runs for a few frames and then comes to a halt. This does not happen when I have the same pipeline for both streams.
Does the pipeline need to be the same after the demux for all the elements?
Is there a better way to do this?