How to request pads to link a tee in NVIDIA DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU (Ubuntu 20.04, 1080 GTX)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
Can’t link tee pads in C++
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Attempting to link a pipeline that includes a tee using gst_element_request_pad() and gst_pad_link().

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

This seems like a trivial thing to accomplish, but my real issue is finding the correct documentation. As stated above, I’ve been trying to use gst_element_request_pad() and gst_pad_link() to link a tee element to the pads of its respective forks. However, official GStreamer documentation has moved on quite a bit from 1.16, the version DeepStream was developed against. The official GStreamer documentation gives examples of linking pads between elements at this link, stating that developers should use gst_element_request_pad_simple(). That function doesn’t seem to exist within DeepStream, as it was only introduced in GStreamer 1.20.

With this documentation conflict in mind, would someone be so kind as to spell out for me how to accomplish the following valid, working gstreamer pipeline string in C++?

gst-launch-1.0 -v v4l2src device=/dev/video2 ! video/x-h264,framerate=30/1,width=1920,height=1080 ! tee name=t t. ! queue ! h264parse ! nvv4l2decoder ! queue ! nveglglessink sync=0 t. ! queue ! rtph264pay pt=96 ! udpsink host= port=7331 sync=0

I can link and execute either fork of that tee individually in C++ with no problem. It is linking the tee to the second fork where it dies: I get a black display screen and no UDP stream at the far end. Curiously, though, I don’t see any errors thrown.

Here is the code linking relevant elements:

        gst_bin_add_many(GST_BIN(data->bin), data->source, data->sourceFilter, data->tee, data->parse, data->decode, data->imageSink, data->payload, data->udpSink, NULL);
        gst_element_link_many(data->source, data->sourceFilter, data->tee, NULL);
        gst_element_link_many(data->parse, data->decode, data->imageSink, NULL);
        gst_element_link_many(data->payload, data->udpSink, NULL);
        gst_element_link(data->tee, data->parse);
        gst_element_link(data->tee, data->payload);

It is when I try to link the tee that the pipeline breaks. I think this is because I’m not specifying exactly which pads should be connected between elements, but as I said, I can’t figure it out from the documentation alone in this case.

Any help is appreciated, thanks in advance.

Hi @Sealfoss

I ran into these problems a few months ago. I ended up using lots of queue elements and also using GStreamer’s debug features to draw the pipeline graph. This let me see visually where I was misconnecting elements (tees are difficult to keep straight in your head). Please see link here:


You can refer to our demo code: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/deepstream_test4_app.c

Solved the problem by creating GstPad pointers:

GstPad *displaySrc, *udpSrc, *displaySink, *udpSink;

Then, getting the static sink pads of the queues and requesting source pads from the tee:

displaySink = gst_element_get_static_pad(data->queueA, "sink");
displaySrc = gst_element_get_request_pad(data->tee, "src_%u");
udpSink = gst_element_get_static_pad(data->queueB, "sink");
udpSrc = gst_element_get_request_pad(data->tee, "src_%u");

Finally, linking the pads manually and unrefing the pointers finished the job:

gst_pad_link (displaySrc, displaySink);
gst_pad_link (udpSrc, udpSink);
gst_object_unref (displaySrc);
gst_object_unref (displaySink);
gst_object_unref (udpSrc);
gst_object_unref (udpSink);

You’re welcome. Glad you got it fixed.



Iain Attwater
President, Neuron Data LLC
+1 (912) 223-9057

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.