Parallel Inferencing Pipeline Cannot Access Classification Data for Secondary Sources

• Hardware Platform (Jetson / GPU): dGPU, NVIDIA RTX 5000
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): N/A (dGPU)
• TensorRT Version: TensorRT 8.6.1.6-1, CUDA 12.2
• NVIDIA GPU Driver Version (valid for GPU only): 535.183.01
• Issue Type (questions, new requirements, bugs): Questions

I have created a parallel inferencing pipeline using the NVIDIA example here as a reference (see the attachment for the pipeline graph). The pipeline uses two sources and four parallel models, following the example above. When I map all four models to Source 0, I can see my classification models' output after the nvdsmetamux. However, when I map two models to Source 0 and two models to Source 1, only the metadata for Source 0 appears after the nvdsmetamux. I need to access all four classification results after the nvdsmetamux, but I currently only see those from the models mapped to Source 0. Do you have any thoughts as to why only the output of the models assigned to Source 0 comes through?

I have attached the pipeline graph, an annotated image showing where I am probing for the classifier_meta_list, and the nvdsmetamux config file.
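For context, the probe is essentially the standard metadata walk over frames, objects, and classifier results. Below is a minimal sketch using the Python bindings for illustration (my actual application differs in structure; `metamux_src_probe` and the element names are placeholders):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def metamux_src_probe(pad, info, user_data):
    """Walk NvDsBatchMeta and report which component produced each classifier result."""
    buf = info.get_buffer()
    if not buf:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buf))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_cls = obj_meta.classifier_meta_list
            while l_cls is not None:
                cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                # unique_component_id identifies which SGIE attached this result
                print(f"source {frame_meta.pad_index}: "
                      f"classifier from component {cls_meta.unique_component_id}")
                l_cls = l_cls.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attached to the nvdsmetamux src pad (and, for comparison, to each nvinfer src pad):
# metamux.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, metamux_src_probe, None)
```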

Things I have tried:

  • Assign the 2 models for Source 1 to Source 0

    • Result: All 4 model outputs are available in the probe after the nvdsmetamux
  • Remove the 2 models for Source 0, leaving only the models for Source 1

    • Result: No model output is available in the probe after the nvdsmetamux
  • In the nvdsmetamux config, change the active-pad to the sinks (sink_3 or sink_4) for one of the Source 1 models (the config layout is sketched after this list)

    • Result: The associated Source 1 model’s output alone is piped through the nvdsmetamux
  • In the nvdsmetamux config, increase the pts-tolerance to something extremely high

    • Result: No change
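For reference, my metamux config follows the layout of the parallel inference sample's config_metamux0.txt. Below is a sketch of the relevant keys; if I recall the sample correctly, the group section maps each model's unique-id to the source ids it runs on. The src-ids-model values here are illustrative, not my exact config:

```ini
[property]
enable=1
# frames are passed through from this sink pad
active-pad=sink_0
# default is 60 ms; I also tried values orders of magnitude larger
pts-tolerance=60000

[group-0]
# src-ids-model-<unique-id>=<source ids the model runs on>
# e.g. models 1 and 2 on Source 0, models 3 and 4 on Source 1
src-ids-model-1=0
src-ids-model-2=0
src-ids-model-3=1
src-ids-model-4=1
```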

Any recommendations are appreciated! Thanks

config_metamux0.txt (1.7 KB)

Can you post all the configuration files? App configuration, model configuration,…

@Fiona.Chen I have attached my model configuration. This pipeline is based on the NVIDIA Parallel Inferencing example, but it is written as a separate application that does not use application configuration files the way the NVIDIA sample does. In addition, the models are configured as SGIEs: I manually draw bounding boxes within my application, which acts as the PGIE (sketched below). The models work fine in the pipeline; I can probe on the src pads of the nvinfer elements and see the output of all four inferencing elements. Probing after the nvdsmetamux only shows output from the nvinfer elements assigned to Source 0. See the “annotated take 2” image for diagrams of where I am probing.
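For completeness, the manual "PGIE" step amounts to injecting an NvDsObjectMeta for each hand-drawn box so the downstream SGIEs crop and infer on it. A minimal sketch in the Python bindings (the coordinates and component id are placeholders; my application's actual logic is more involved):

```python
import pyds

UNTRACKED_OBJECT_ID = 0xFFFFFFFFFFFFFFFF  # no tracker in use

def add_manual_object(batch_meta, frame_meta, left, top, width, height):
    """Inject a hand-drawn box as object meta so downstream SGIEs infer on it."""
    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
    obj_meta.unique_component_id = 1  # placeholder id standing in for a PGIE
    obj_meta.class_id = 0
    obj_meta.object_id = UNTRACKED_OBJECT_ID
    rect = obj_meta.rect_params
    rect.left, rect.top, rect.width, rect.height = left, top, width, height
    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)
    return obj_meta
```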

I was hoping for help reviewing the pipeline graph for any noticeable issues. After further debugging, it looks like the nvdsmetamux element is only combining metadata for Source 0 and ignoring Source 1. Is there any reason the nvdsmetamux element would not combine the Source 1 metadata received on sink_3 and sink_4 with the Source 1 frames received on sink_0? It currently combines the Source 0 metadata received on sink_1 and sink_2 with the Source 0 frames received on sink_0.
myModelConfig.txt (901 Bytes)

Thanks for the help!

Can you reproduce the issue with our sample deepstream_reference_apps/deepstream_parallel_inference_app at master · NVIDIA-AI-IOT/deepstream_reference_apps? It is hard to analyze the issue from your description alone.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.