Can't find a way to run different secondary models on different streams

I am running a pipeline on multiple streams. I need to run one secondary model on some of the streams, and a different secondary model on the remaining streams.
So far I have tried two approaches.

**First approach**
The first approach, described here: Python3 core dumped
It seems to crash randomly after a while, ending in a core dump. The approach is based on manually changing the PGIE id of each object from a probe, i.e. editing obj_meta.unique_component_id. By changing the component id I can enable or disable the secondary models per stream. This works, but eventually results in a core dump.
I’d like to use this approach because it is rather simple. However, since I haven't found a solution to the crash yet, I am also trying the second approach.
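For reference, the retagging probe described above might look like the sketch below. This is a minimal illustration, not the poster's actual code: the stream-to-component mapping, the ids, and the probe name are all hypothetical, and the DeepStream Python bindings (pyds) are imported lazily so the routing logic stays importable without them.

```python
# Hypothetical mapping: stream (source) id -> gie-unique-id the objects
# from that stream should be tagged with, so that only the SGIE configured
# with the matching operate-on-gie-id processes them.
STREAM_TO_COMPONENT_ID = {
    0: 1,  # streams 0-2: handled by the first secondary model
    1: 1,
    2: 1,
    3: 2,  # streams 3-4: handled by the other secondary model
    4: 2,
}

def component_id_for_stream(source_id, default=1):
    """Return the component id to assign to objects from this stream."""
    return STREAM_TO_COMPONENT_ID.get(source_id, default)

def retag_probe(pad, info, _user_data):
    """Buffer probe placed on the PGIE src pad: retag objects per stream.

    Requires the DeepStream Python bindings (pyds) and PyGObject.
    """
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        target = component_id_for_stream(frame_meta.source_id)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # An SGIE with operate-on-gie-id set only processes objects
            # whose unique_component_id matches; retag to route per stream.
            obj_meta.unique_component_id = target
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The probe would be attached with `pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, retag_probe, None)`.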

**Second approach**
The second approach was proposed to me by NVIDIA here: Run secondary models only on certain streams - #7 by kesong
It consists of using multiple nvstreammux and nvstreamdemux elements to demux and re-mux the sources, connecting only the appropriate sources to each model.
I tried this approach, but every time I connect pads from an nvstreamdemux to an nvstreammux element, I get the error:

MainThread 2022-07-13 17:26:33,516 - pipeline.bus_call - ERROR - Bus call: Error: gst-stream-error-quark: NvBufSurfTransform failed with error -3 while converting buffer (1): gstnvinfer.cpp(1376): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline0/GstNvInfer:vehicles-make-nvinference-engine

I am wondering whether it is even possible to connect the output of an nvstreamdemux element to an nvstreammux element. According to this post, Stream muxing after demuxing - #9 by Amycao, it is not possible. I wanted to double-check with you whether it is or not.
The pipeline I built seems correct (see attached PDF), yet it reports the error above.
The attached pipeline has many pads (to support 64 streams) but only 5 streams connected; that is why you will see a lot of unused pads.
pipeline.pdf (75.3 KB)
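The demux-to-mux wiring described above can be sketched as follows, assuming the Python GStreamer bindings. The element variables and slot assignments are hypothetical; the key point is that nvstreamdemux exposes request src pads named `src_%u` and nvstreammux exposes request sink pads named `sink_%u`.

```python
def pad_names(demux_stream_id, mux_slot):
    """Return the (nvstreamdemux src pad, nvstreammux sink pad) names to link."""
    return (f"src_{demux_stream_id}", f"sink_{mux_slot}")

def link_demux_to_mux(demux, mux, demux_stream_id, mux_slot):
    """Request the pads on both elements and link them.

    Requires PyGObject; `demux` is an nvstreamdemux and `mux` an
    nvstreammux element already added to the pipeline.
    """
    from gi.repository import Gst

    src_name, sink_name = pad_names(demux_stream_id, mux_slot)
    src_pad = demux.get_request_pad(src_name)
    sink_pad = mux.get_request_pad(sink_name)
    return src_pad.link(sink_pad) == Gst.PadLinkReturn.OK
```

For example, `link_demux_to_mux(demux, vehicles_mux, 3, 0)` would route stream 3 into the first slot of the mux feeding the vehicles model. Note that the demux request pads should be requested before the pipeline goes to PLAYING.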

I am willing to try other approaches. As of now I am really confused as to why the first approach does not work.

You can use nvstreamdemux + nvstreammux to separate the sources. Please refer to the DeepStream samples to see how to use them. Here is the plugin introduction: Gst-nvstreammux — DeepStream 6.1.1 Release documentation

If I have nvstreammux -> nvinfer -> nvstreamdemux -> nvstreammux, will the last nvstreammux work correctly? The last nvstreammux will receive data that already contains metadata: if I am not wrong, each buffer coming out of an nvstreamdemux already carries a NvDsBatchMeta with batch-size=1, so it already has DeepStream metadata attached. Will that metadata be preserved?
I ask because when I do as you suggested, I get the error reported above.

Please refer to the following command. The output image still has the object lines (bounding boxes) drawn, so the metadata is preserved through the second nvstreammux:
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.jpg ! jpegdec ! nvvideoconvert ! 'video/x-raw,format=I420' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! nvstreamdemux name=demux demux.src_0 ! nvvideoconvert ! 'video/x-raw,format=I420' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! mux1.sink_0 nvstreammux name=mux1 batch-size=1 width=1920 height=1080 ! nvstreamdemux name=demux1 demux1.src_0 ! 'video/x-raw(memory:NVMM),format=NV12' ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! 'video/x-raw,format=I420' ! jpegenc ! filesink location=8.jpeg

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.