Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
- GPU
• DeepStream Version
- 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
- 535.129.03
• Issue Type (questions, new requirements, bugs)
- Question
The code below uses a tee in a DeepStream Python pipeline to split the stream into two branches. pgie and pgie2 use different models. With the linking shown below, the first branch (the one with the tiler) correctly outputs results from pgie's model only. The problem is in the second branch's output: it contains not only pgie2's results but pgie's inference results as well. I would appreciate it if you could let me know what the problem is.
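For reference, this is roughly the topology the code builds:

streammux -> queue1 -> streamdemux -> tee
  tee src_0 -> queue2 -> pgie  -> nvvidconv1 -> queue3 -> tiler1 -> queue4 -> nvosd1 -> queue5 -> sink1
  tee src_1 -> queue6 -> pgie2 -> nvvidconv2 -> queue7 -> nvosd2 -> queue8 -> sink2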
# Common upstream: streammux -> queue1 -> streamdemux -> tee
streammux.link(queue1)
queue1.link(streamdemux)
streamdemux.link(tee)
# queue1.link(tee)

# First tee branch: pgie -> tiler -> OSD -> sink1
tee1_pad = tee.get_request_pad("src_%u")
if not tee1_pad:
    sys.stderr.write("Unable to get request pad for the first branch\n")
sink_pad_queue2 = queue2.get_static_pad("sink")
tee1_pad.link(sink_pad_queue2)
queue2.link(pgie)
pgie.link(nvvidconv1)
nvvidconv1.link(queue3)
queue3.link(tiler1)
tiler1.link(queue4)
queue4.link(nvosd1)
nvosd1.link(queue5)
queue5.link(sink1)

# Second tee branch: pgie2 -> OSD -> sink2
# streammux.link(queue10)
# queue10.link(tee)
tee2_pad = tee.get_request_pad("src_%u")
if not tee2_pad:
    sys.stderr.write("Unable to get request pad for the second branch\n")
sink_pad_queue6 = queue6.get_static_pad("sink")
tee2_pad.link(sink_pad_queue6)
queue6.link(pgie2)
pgie2.link(nvvidconv2)
nvvidconv2.link(queue7)
queue7.link(nvosd2)
nvosd2.link(queue8)
queue8.link(sink2)
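For debugging, a pad probe like the sketch below can print which component attached each object that reaches the second branch. The probe name is illustrative, and it assumes gie-unique-id=1 is set in pgie's config file and gie-unique-id=2 in pgie2's, so each object's unique_component_id tells which model produced it:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd2_sink_pad_probe(pad, info, u_data):
    # Walk the batch meta and report the gie-unique-id that attached
    # each object, to check whether pgie's metadata reaches this branch.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if not batch_meta:
        return Gst.PadProbeReturn.OK
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            print("pad %d: object from gie-unique-id %d"
                  % (frame_meta.pad_index, obj_meta.unique_component_id))
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

nvosd2.get_static_pad("sink").add_probe(
    Gst.PadProbeType.BUFFER, osd2_sink_pad_probe, 0)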
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)