Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0.0
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): Bug
• How to reproduce the issue?
Here is the Pipeline Graph.
I am trying to create a parallel inference app (as shown in the graph). It correctly displays my four output streams, but does not show the inference results on streams 3 and 4 (i.e., branch 2).
Can someone please explain where the issue is? Is it coming from the metamux or from another component?
My metamux config:
[property]
enable=1
# sink pad whose data will be passed to the src pad.
active-pad=sink_0
# default pts-tolerance is 60 ms.
pts-tolerance=60000
[user-configs]
[group-0]
# src-ids-model-<model unique ID>=<source ids>
# mux all sources if this is not set.
# src-ids-model-10=0
# src-ids-model-1=2;3
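As a sanity check, an explicit mapping can rule out problems with the mux-all default. A minimal sketch of a [group-0] section, assuming branch 1's PGIE has unique-id 1 (sources 0 and 1) and branch 2's PGIE has unique-id 2 (sources 2 and 3); these IDs are placeholders and must match the gie-unique-id values in your actual nvinfer configs:

```
[group-0]
# hypothetical IDs: model 1 = branch 1 on sources 0-1,
#                   model 2 = branch 2 on sources 2-3
src-ids-model-1=0;1
src-ids-model-2=2;3
```

If branch 2's detections still do not appear with its sources listed explicitly, the metadata is likely not reaching metamux in the first place.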
One thing to note is that I am using the new nvstreammux, since the old one was giving me a hard time.
The batch size is set in the code based on the number of source streams (in case you are wondering why it is not set in the config).
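For context, this is roughly how the batch size is derived before being applied with set_property. A minimal sketch, assuming a sources CSV in the style of sources_4_different_source.csv with one header row and one row per stream (column names and file paths below are placeholders):

```python
import csv
import io

def batch_size_from_sources(csv_text: str) -> int:
    """Count the source rows (everything after the header) in a sources CSV.

    The result is then used as the batch-size for the nvinfer elements
    instead of hard-coding it in the config files.
    """
    rows = [r for r in csv.reader(io.StringIO(csv_text)) if r]
    return max(len(rows) - 1, 0)  # subtract the header row

# Four streams -> batch size 4 (paths are placeholders)
sample = (
    "enable,type,uri\n"
    "1,3,file:///opt/streams/cam0.mp4\n"
    "1,3,file:///opt/streams/cam1.mp4\n"
    "1,3,file:///opt/streams/cam2.mp4\n"
    "1,3,file:///opt/streams/cam3.mp4\n"
)
print(batch_size_from_sources(sample))  # 4
```

The computed value is then passed to each branch's nvinfer via `set_property("batch-size", ...)` at pipeline build time.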
@Fiona.Chen
Here is the output when running the sample with the default configs (source4_1080p_dec_parallel_infer.yml with the sources_4_different_source.csv)
@Fiona.Chen I changed the metamux config to the following (commented out the src-ids-model lines, just like in my config):
[property]
enable=1
# sink pad whose data will be passed to the src pad.
active-pad=sink_0
# default pts-tolerance is 60 ms.
pts-tolerance=60000
[user-configs]
[group-0]
# src-ids-model-<model unique ID>=<source ids>
# mux all sources if this is not set.
# src-ids-model-1=0;1
# src-ids-model-2=1;2
# src-ids-model-3=1;2
@Fiona.Chen
My app is written in Python, and it follows the same pipeline configuration as the sample.
Is it possible that this does not work in Python? I cannot think of any other cause.
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.