Adding a ghost pad after splitting a pipeline using Tee?

I am trying to modify deepstream-app to add two detectors: adding a tee after nvvidconv and then plugging multiple detectors into it. There is a ghost pad that is added:

NVGSTDS_BIN_ADD_GHOST_PAD (bin->bin, bin->primary_gie, "src");

But how do I add the same when there is a tee involved?

Hi,
Please share a block diagram of the pipeline so that we can easily understand your use case. It is not a case supported by default, and we may check whether we can include it in a future release.

@DaneLLL, I’ve attached a link to a partial block diagram of what I was trying to achieve. I modified the code in a similar way, but the main problem so far has been fixing the pads that are linked at various points. For a single source I got it to work, but with multiple sources the classifiers seem to fail.

Partial block diagram

Ghost pads are only necessary when the elements are in different hierarchies (for example, if you have element foo inside bin A and you want to link it to element bar in bin B).
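For illustration, here is a minimal sketch of manually ghosting a pad out of a bin; the element names are placeholders and the pipeline/downstream variables are assumed to exist already. This is roughly what the NVGSTDS_BIN_ADD_GHOST_PAD macro does for you:

/* Assumed to be created elsewhere in the application. */
extern GstElement *pipeline, *downstream;

GstElement *bin = gst_bin_new ("my_bin");
GstElement *conv = gst_element_factory_make ("nvvideoconvert", "conv");
gst_bin_add (GST_BIN (bin), conv);

/* Expose conv's src pad on the bin as a ghost pad named "src". */
GstPad *target = gst_element_get_static_pad (conv, "src");
gst_element_add_pad (bin, gst_ghost_pad_new ("src", target));
gst_object_unref (target);

/* Now the bin itself can be linked like any other element. */
gst_bin_add (GST_BIN (pipeline), bin);
gst_element_link (bin, downstream);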

The freedesktop server is down for me (really, it’s horrible), but you can check out the Python docs here for Bin.

Gst.Bin is an element that can contain other Gst.Element, allowing them to be managed as a group. Pads from the child elements can be ghosted to the bin, see Gst.GhostPad. This makes the bin look like any other elements and enables creation of higher-level abstraction elements.

If elements are at the same level of the hierarchy, you can just link them as you normally would, whether the pads involved are request, always, or sometimes pads.

There is also a gst_pad_link_maybe_ghosting function that will create ghost pads for you if the hierarchies are different.

If the site is down:

gboolean
gst_pad_link_maybe_ghosting (GstPad * src,
                             GstPad * sink)

Links src to sink, creating any GstGhostPad’s in between as necessary.

This is a convenience function to save having to create and add intermediate GstGhostPad’s as required for linking across GstBin boundaries.

If src or sink pads don’t have parent elements or do not share a common ancestor, the link will fail.

Parameters:

src – a GstPad
sink – a GstPad
Returns – whether the link succeeded.
Since: 1.10

Then you don’t have to ghost them and add them to a bin manually.
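For example (a sketch; foo and bar stand in for elements that already live in different bins inside the same pipeline):

/* Link foo (in one bin) to bar (in another bin); GStreamer creates the
 * intermediate ghost pads as needed. */
GstPad *srcpad = gst_element_get_static_pad (foo, "src");
GstPad *sinkpad = gst_element_get_static_pad (bar, "sink");

if (!gst_pad_link_maybe_ghosting (srcpad, sinkpad))
  g_printerr ("linking across bin boundaries failed\n");

gst_object_unref (srcpad);
gst_object_unref (sinkpad);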

Also, when a tee is involved there must be queues on its source pads. To rejoin the branches, I am not sure what you’ll have to use or whether it’s even possible, since I believe you’ll have to resync the streams.
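For example, a generic tee with a queue on each branch looks like this (a plain videotestsrc/fakesink sketch, not your DeepStream pipeline):

gst-launch-1.0 videotestsrc ! tee name=t \
    t. ! queue ! autovideosink \
    t. ! queue ! fakesink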

My understanding is that it’s possible to run secondary inference engines in async mode if they are after a tracker element, so a buffer can be passed down the pipeline, allowing the next engine to do its thing at the same time. You can see an example of its use in this config here and in deepstream-test2.
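From memory, the switch lives in the secondary nvinfer config file and looks roughly like this (key names are taken from the DeepStream 4.x sample configs and may differ slightly between releases):

[property]
# 2 = secondary mode, operate on objects produced by another gie
process-mode=2
# run the classifier asynchronously; only meaningful in secondary mode
# when a tracker is present
classifier-async-mode=1
# only run on objects detected by the gie with this unique-id
operate-on-gie-id=1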

@mdegans, I wanted to have two detectors before the tracker. deepstream-app is written in such a way that it only accepts a single detector before the tracker, and adding a detector after the tracker doesn’t let the objects get a tracker ID. For some reason, when I tried adding a detector after the tracker, even though there were bounding boxes, the classification results were not getting populated in the buffer. Is the tee element necessary in order to have a second detector? I went with that approach since classifiers are added to the pipeline in that way. But could it be done differently, i.e. adding them sequentially without splitting the pipeline? Thanks.

I am not sure if it’s possible to have multiple primary inference engines like you want. I think the idea is that instead of two models detecting Foo and Bar respectively, you have one that detects both, and then you run classifiers/whatever on the results of each.

This is a question DaneLLL would have to answer for certain. I don’t recall a DeepStream example with two pgies and have never tried it. If it did work, I suspect they would need to be in series. I know multiple classifiers aren’t a problem, however, and from what I recall in the examples they are placed in series in async mode. The documentation says specifically that this is only supported in secondary mode, however.

Hi,
The case we have supported and verified is
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream%20Development%20Guide%2Fdeepstream_app_architecture.html%23
It is demonstrated in

deepstream_sdk_v4.0.2_jetson\samples\configs\deepstream-app\source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

So it would be better if you can fit your case into this architecture, such as applying your models as secondary classifiers.

@DaneLLL, do you mean using the detector models after the tracker?

Hi,

No. I just want to show that this is the complicated case we support in the DeepStream SDK.
Since nvtracker can only connect to one nvinfer, your modified pipeline is not supported.

Hi,
One possible solution is to link two nvinfer in series such as:

$ gst-launch-1.0 uridecodebin uri=file:///home/nvidia/1080.mp4 ! mx.sink_0 \
    nvstreammux width=1920 height=1080 batch-size=1 name=mx ! \
    nvinfer config-file-path=/home/nvidia/deepstream-4.0/samples/configs/deepstream-app/config_infer_primary_nano.txt unique-id=7 ! \
    nvinfer config-file-path=/home/nvidia/deepstream-4.0/samples/configs/deepstream-app/config_infer_primary.txt unique-id=8 ! \
    nvvideoconvert ! nvdsosd ! nvoverlaysink

You can distinguish the results by unique-id.
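For example, here is a sketch of a pad probe (attached downstream, e.g. on the OSD sink pad) that reads each object’s unique_component_id from the standard DeepStream batch metadata; that field should match the unique-id (7 or 8 above) of the nvinfer that produced the detection:

/* A sketch, assuming the standard DeepStream metadata API (gstnvdsmeta.h). */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
osd_sink_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      /* unique_component_id carries the unique-id of the nvinfer
       * that detected this object. */
      g_print ("detector %d found class %d\n",
               obj_meta->unique_component_id, obj_meta->class_id);
    }
  }
  return GST_PAD_PROBE_OK;
}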

Thanks, I was able to solve it. There are two ways it could work: connecting the detectors in series before the tracker, or making the second detector a secondary model and running it after the tracker. That solved it. Thanks.