I’m trying to run multiple models on the same source and produce a separate output for each. However, using multiple instances of nvdsosd with different settings causes them to interfere with each other.
Below is a minimal example that reproduces the bug.
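Roughly, the layout is a single nvstreammux followed by a tee, with one nvinfer + nvdsosd branch per model. The sketch below is illustrative only: the file name, resolutions, nvinfer config paths, and sink are placeholders for my actual setup.

```
# Illustrative sketch: one nvstreammux, then a tee into two inference branches.
# The two nvdsosd instances have different display settings, but their
# drawings interfere with each other.
gst-launch-1.0 \
  filesrc location=sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! tee name=t \
  t. ! queue ! nvinfer config-file-path=model_a_config.txt \
     ! nvvideoconvert ! nvdsosd display-text=1 display-bbox=1 ! nveglglessink \
  t. ! queue ! nvinfer config-file-path=model_b_config.txt \
     ! nvvideoconvert ! nvdsosd display-text=0 display-bbox=1 ! nveglglessink
```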
I have tried placing the tee at different points in the pipeline, but the result is always the same.
• NVIDIA GPU Driver Version (valid for GPU only): 545.29.02
• Issue Type (questions, new requirements, bugs): Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
In your pipeline, both paths share a single DeepStream batch meta (NvDsBatchMeta). To avoid race conditions on the metadata, each path should have its own nvstreammux, which means the tee element should be placed before nvstreammux.
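A rough gst-launch sketch of that layout (file name, resolutions, nvinfer config paths, and sinks are placeholders):

```
# Illustrative sketch: tee right after the decoder, then one nvstreammux,
# nvinfer, and nvdsosd per branch. Each nvstreammux attaches its own batch
# metadata, so the two nvdsosd instances no longer share it.
gst-launch-1.0 \
  filesrc location=sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! tee name=t \
  t. ! queue ! m1.sink_0 nvstreammux name=m1 batch-size=1 width=1280 height=720 \
     ! nvinfer config-file-path=model_a_config.txt \
     ! nvvideoconvert ! nvdsosd display-text=1 display-bbox=1 ! nveglglessink \
  t. ! queue ! m2.sink_0 nvstreammux name=m2 batch-size=1 width=1280 height=720 \
     ! nvinfer config-file-path=model_b_config.txt \
     ! nvvideoconvert ! nvdsosd display-text=0 display-bbox=1 ! nveglglessink
```

Since the tee sits after the decoder, the source is still decoded only once; only the batching and metadata are duplicated per branch.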
@junshengy Yes, I do want parallel inference, but it’s really two separate pipelines with the same source. There is no need to combine the results at the end; I just didn’t want to decode the source twice.