Parallel Pipeline with Metamux does not show inference results on both branches

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0.0
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): Bug
• How to reproduce the issue?
Here is the Pipeline Graph.

I am trying to create a parallel inference app (as shown in the graph). It correctly displays my 4 output streams, but it does not show the inference results on streams 3 and 4 (i.e., the second branch).

Can someone please explain where the issue is? Is it coming from the metamux or from another component?
My metamux config:

[property]
enable=1
# name of the sink pad whose data will be passed to the src pad.
active-pad=sink_0
# default pts-tolerance is 60 ms.
pts-tolerance=60000

[user-configs]

[group-0]
# src-ids-model-<model unique ID>=<source ids>
# mux all sources if this is not set.
# src-ids-model-10=0
# src-ids-model-1=2;3
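
For reference, this is roughly how the two inference branches are attached to the metamux in my app. It is a simplified sketch, and the element and variable names are illustrative rather than taken from my actual code:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("sketch")

metamux = Gst.ElementFactory.make("nvdsmetamux", "metamux")
metamux.set_property("config-file", "./config/metamux/metamux_config.txt")
pipeline.add(metamux)

# Stand-ins for the last element of each inference branch (in the real
# pipeline this is the element after each tracker).
branch_tails = []
for name in ("vehicle-tail", "people-tail"):
    tail = Gst.ElementFactory.make("queue", name)
    pipeline.add(tail)
    branch_tails.append(tail)

# Each parallel branch is linked to its own request sink pad on nvdsmetamux.
# Buffers are forwarded downstream from the pad named by active-pad
# (sink_0 in the config above), and the metadata arriving on the other sink
# pads is merged into that stream. With the src-ids-model-<unique-id> lines
# commented out, metadata from all sources should be muxed for every branch.
for idx, tail in enumerate(branch_tails):
    sink_pad = metamux.get_request_pad(f"sink_{idx}")
    tail.get_static_pad("src").link(sink_pad)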

Can you post all your configurations? deepstream_reference_apps/deepstream_parallel_inference_app at master · NVIDIA-AI-IOT/deepstream_reference_apps (github.com)

Hello @Fiona.Chen

I have created a custom YAML config parser that creates and connects my elements. Here is my config (a rough sketch of the kind of loop the parser runs follows after it):

general-params:
  save-graph: true
plugins:     
  - name: "stream-1"
    type: "source"
    path: "1.avi"
    drop-frame-interval: 0
    connected-to:
      - streammux

  - name: "stream-2"
    type: "source"
    path: "2.avi"
    drop-frame-interval: 0
    connected-to:
      - streammux

  - name: "stream-3"
    type: "source"
    path: "videoplayback.avi"
    drop-frame-interval: 0
    connected-to:
      - streammux

  - name: "stream-4"
    type: "source"
    path: "TownCentreXVID.avi"
    drop-frame-interval: 0
    connected-to:
      - streammux

  - name: "streammux"
    type: "nvstreammux"
    connected-to:
      - "filter-source"
      - "metamux"

  - name: "filter-source"
    type: "filter-source"
    connected-to:
      - branch: "vehicle-branch"
        sources: [0, 1]
      - branch: "people-branch"
        sources: [2, 3]

  # GIE elements for inference on streams 1 and 2 (vehicle processing)
  - name: "vehicle-branch"
    type: "custom-branch"
    add-queue-at-start: true
    add-queues-between-elements: true
    branch-plugins:
      - name: "vehicle-inference"
        type: "gie"
        config-file-path: "./config/vehicle_type_detection_yolo_config/pgie_vehicle_type_detection_config.txt"
        probes:
          - type: "bbox-color"
            pad: "src"
            parameters:
              class_colors:
                person: [1.0, 0.0, 0.0]   # Red for person
                car: [0.0, 1.0, 0.0]      # Green for car
                3: [0.0, 0.0, 1.0]        # Blue for motorbike (by ID)
                bus: [1.0, 1.0, 0.0]      # Yellow for bus
                truck: [1.0, 0.0, 1.0]    # Magenta for truck
          - type: "latency"
            pad: "src"
            parameters:
              interval: 10
              measure_latency: false

      - name: "vehicle-tracker"
        type: "nvtracker"
        config-file-path: "./config/vehicle_tracker_config/dstest2_tracker_config.txt"

    connected-to: 
      - "metamux"

  # GIE elements for inference on streams 3 and 4 (people processing)
  - name: "people-branch"
    type: "custom-branch"
    add-queue-at-start: true
    add-queues-between-elements: true
    branch-plugins:
      - name: "people-inference"
        type: "gie"
        config-file-path: "./config/vehicle_type_detection_yolo_config/pgie_vehicle_type_detection_config1.txt"
        probes:
          - type: "bbox-color"
            pad: "src"
            parameters:
              class_colors:
                person: [1.0, 0.0, 0.0]   # Red for person
          - type: "latency"
            pad: "src"
            parameters:
              interval: 10
              measure_latency: false
      - name: "people-tracker"
        type: "nvtracker"
        config-file-path: "./config/vehicle_tracker_config/dstest2_tracker_config1.txt"

    connected-to: 
      - "metamux"

  - name: "metamux"
    type: "nvdsmetamux"
    config-file: "./config/metamux/metamux_config.txt"
    connected-to:
      - "vehicle-middle-elements"
      # - "vehicle-msg"

  - name: "vehicle-middle-elements"
    type: "custom-branch"
    add-queues-between-elements: true
    add-queue-at-end: true
    branch-plugins:
      - name: "vehicle-nvdslogger"
        type: "nvdslogger"

      - name: "vehicle-tiler"
        type: "nvmultistreamtiler"
        height: 720
        width: 1280
        probes:
          - type: "event-message-vehicle"
            pad: "sink"
            parameters:
              interval: 10

      - name: "vehicle-nvvidconv"
        type: "nvvideoconvert"
        nvbuf-memory-type: 3

      - name: "vehicle-nvosd"
        type: "nvdsosd"
        osd-process-mode: 0
        osd-display-text: 1

    connected-to:
      - "vehicle-display-queue"
      - "vehicle-msg"




  - name: "vehicle-display-queue"
    type: "queue"
    connected-to:
      - "vehicle-display"


  - name: "vehicle-display"
    type: "displaysink"
    sync: 1
    async: 0
    qos: 0

  - name: "vehicle-msg"
    type: "custom-branch"
    add-queue-at-start: true
    enabled: true
    branch-plugins:
      - name: "vehicle-queuemsgconv"
        type: "queue"

      - name: "vehicle-msgconv"
        type: "msgconv"
        config: "/opt/nvidia/deepstream/deepstream/sources/python/stream_service/config/nvmsgconv_config/msgconv_config.txt"
        payload-type: 0
        msg2p-lib: "/opt/nvidia/deepstream/deepstream/sources/python/stream_service/lib/custom_libraries/libs/nvmsgconv/libnvds_msgconv.so"

      - name: "vehicle-broker"
        type: "msgbroker"
        proto-lib: "/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so"
        conn-str: "localhost;9092;deepstream-metadata-vehicle"
        sync: 0
        topic: "my-topic"
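
The parser itself is not included in this post, but to give an idea of what it does with the YAML above, here is a minimal sketch (assuming PyYAML and PyGObject; the type-to-factory mapping, property handling, and all names are simplified and illustrative, not my actual code):

import gi
import yaml

gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Illustrative mapping from the YAML "type" field to GStreamer factory names.
# The real parser also handles sources, custom branches, sinks, probes, etc.
FACTORIES = {
    "nvstreammux": "nvstreammux",
    "nvdsmetamux": "nvdsmetamux",
    "gie": "nvinfer",
    "nvtracker": "nvtracker",
    "nvdslogger": "nvdslogger",
    "nvmultistreamtiler": "nvmultistreamtiler",
    "nvvideoconvert": "nvvideoconvert",
    "nvdsosd": "nvdsosd",
    "queue": "queue",
}

RESERVED_KEYS = {"name", "type", "connected-to", "probes", "branch-plugins",
                 "add-queue-at-start", "add-queue-at-end",
                 "add-queues-between-elements", "enabled"}

def build_pipeline(config_path):
    Gst.init(None)
    with open(config_path) as f:
        cfg = yaml.safe_load(f)

    pipeline = Gst.Pipeline.new("parallel-pipeline")
    elements = {}

    # First pass: create elements and set any properties they actually expose.
    for plugin in cfg["plugins"]:
        factory = FACTORIES.get(plugin["type"])
        if factory is None:
            continue  # sources, custom branches, etc. are handled separately
        element = Gst.ElementFactory.make(factory, plugin["name"])
        for key, value in plugin.items():
            if key not in RESERVED_KEYS and element.find_property(key) is not None:
                element.set_property(key, value)
        pipeline.add(element)
        elements[plugin["name"]] = element

    # Second pass: link elements according to their "connected-to" lists.
    for plugin in cfg["plugins"]:
        src = elements.get(plugin["name"])
        if src is None:
            continue
        for target in plugin.get("connected-to", []):
            if isinstance(target, str) and target in elements:
                src.link(elements[target])

    return pipeline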

One thing to also note: I am using the NEW nvstreammux, since the old one was giving me a hard time.
The batch size is set in code based on the number of source streams (in case you are wondering why it is not set in the config).
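
For completeness, this is roughly what that looks like; a minimal sketch, assuming the new mux is selected via the usual USE_NEW_NVSTREAMMUX environment variable and that the source count comes from the parsed YAML:

import os
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst

# The new nvstreammux is selected through an environment variable; in
# practice this is exported before the process starts rather than set here.
os.environ.setdefault("USE_NEW_NVSTREAMMUX", "yes")

Gst.init(None)

num_sources = 4  # in the real app: number of "source" entries in the YAML

streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
streammux.set_property("batch-size", num_sources)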

Please refer to the implementation of deepstream_reference_apps/deepstream_parallel_inference_app at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub

Hello again @Fiona.Chen .

I actually did refer to that implementation and did the same, but for some reason the metamux is not behaving as expected.

As already explained, we only get the metadata from one branch (and not from both)…

Thanks for the help!

The pipeline graph of the parallel sample is as follows:

We will correct the graph in the GitHub repo.

Does the deepstream_reference_apps/deepstream_parallel_inference_app at master · NVIDIA-AI-IOT/deepstream_reference_apps (github.com) sample work for you? You may try your configurations with our sample first, and then implement your own app by referring to it.

@Fiona.Chen We are using the exact same structure you uploaded (see the uploaded pipeline graph; it shows this).

And it still does not work for ALL branches.

Does the deepstream_reference_apps/deepstream_parallel_inference_app at master · NVIDIA-AI-IOT/deepstream_reference_apps (github.com) sample work for you?

@Fiona.Chen
Here is the output when running the sample with the default configs (source4_1080p_dec_parallel_infer.yml with sources_4_different_source.csv):

@Fiona.Chen I changed the metamux config to the following (commented out the src-ids-model lines, just like in my config):

[property]
enable=1
# name of the sink pad whose data will be passed to the src pad.
active-pad=sink_0
# default pts-tolerance is 60 ms.
pts-tolerance=60000

[user-configs]

[group-0]
# src-ids-model-<model unique ID>=<source ids>
# mux all sources if this is not set.
# src-ids-model-1=0;1
# src-ids-model-2=1;2
# src-ids-model-3=1;2

and got the following result:

Yes. The default configuration output and your modified configuration output are both correct. You may refer to the sample to implement your own app.

@Fiona.Chen
My app is written in Python, and it follows the same pipeline configuration as the sample.
Is it possible that this does not work in Python? I cannot think of any other issue.

It is just another programming language. If the pipeline and configuration are the same, the result will be the same.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.