• Hardware Platform (Jetson / GPU): dGPU, NVIDIA RTX 5000
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): N/A (dGPU)
• TensorRT Version: 8.6.1.6-1, CUDA 12.2
• NVIDIA GPU Driver Version (valid for GPU only): 535.183.01
• Issue Type (questions, new requirements, bugs): Questions
I have created a parallel inferencing pipeline using the NVIDIA example here as a reference (see the attached pipeline graph). The pipeline uses two sources and four parallel models, following that example. When I map all four models to Source 0, I can see every classification model's output after the nvdsmetamux. However, when I map two models to Source 0 and two models to Source 1, only the metadata for Source 0 appears after the nvdsmetamux. I need to access all four classification results after the nvdsmetamux, but I currently only see those from the models mapped to Source 0. Do you have any thoughts as to why only the output of the models assigned to Source 0 comes through?
I have attached the pipeline graph, an annotated image showing where I am probing for the classifier_meta_list, and the nvdsmetamux config file.
Things I have tried (my nvdsmetamux config is sketched after this list):

- Assign the 2 models for Source 1 to Source 0
  - Result: all 4 model outputs are available in the probe after the nvdsmetamux
- Remove the 2 models for Source 0, leaving only the models for Source 1
  - Result: no model output is available in the probe after the nvdsmetamux
- In the nvdsmetamux config, change the active-pad to the sink of one of the Source 1 models (sink_3 or sink_4)
  - Result: only that Source 1 model's output is piped through the nvdsmetamux
- In the nvdsmetamux config, increase the pts-tolerance to an extremely high value
  - Result: no change
Any recommendations are appreciated. Thanks!
config_metamux0.txt (1.7 KB)