A question about deepstream_parallel_inference_app

• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.2
• TensorRT Version: 8.5.2.2

I created my pipeline referring to GitHub - NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel. The pipeline looks like this:

I want to know whether primary_gie_0_bin and primary_gie_1_bin are synchronized.
For example, suppose gie1_bin runs faster: gie1_bin is processing the 3rd batch while gie0_bin is still on the 1st batch.

All inference models run asynchronously. The metamux synchronizes the inference results.
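For reference, the synchronization behavior is controlled by the nvdsmetamux config file. A minimal sketch (keys as described in the plugin's README; the values below are illustrative, not taken from this thread):

```ini
# Hypothetical minimal gst-nvdsmetamux config (keys per the plugin README;
# values are illustrative).
[property]
enable=1
# The sink pad whose buffers pass through to the src pad; metadata
# from the other branches is merged onto these buffers.
active-pad=sink_0
# Tolerated PTS difference when matching buffers from different
# branches for metadata merging.
pts-tolerance=60000
```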

Thanks for your reply. Where can I get the metamux usage documentation?

You can find the document here: deepstream_parallel_inference_app/tritonclient/sample/gst-plugins/gst-nvdsmetamux/README at master · NVIDIA-AI-IOT/deepstream_parallel_inference_app (github.com)

Thanks.
I have a question about metamux. I set up four sources; only source0 should be able to detect vehicles, but I receive vehicle data for source1, source2, and source3 from Kafka.


And my metamux config is:

But it works correctly when I replace source0 with source1, source2, or source3: I can't get any data from Kafka because no vehicle is detected.

Please provide the app configuration together with the metamux configuration.

config_metamux.txt (1.6 KB)
scs_pipeline.yml (6.0 KB)

Your branch0 and branch1 share the same sources (0,1,2,3), so why did you say "only source0 can detect vehicle"?

Your model 1 and model 2 run inference on all 4 sources according to your configuration. Please refer to NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel. (github.com)
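If each branch should only infer on a subset of the sources, the parallel inference app's YAML config lets you select source IDs per branch. A sketch, assuming the schema of the sample config in the repo (the branch/PGIE IDs and source lists below are illustrative):

```yaml
# Hypothetical per-branch source selection (schema per the parallel
# inference app's sample YAML; IDs are illustrative).
branch0:
  pgie-id: 1        # e.g. the vehicle detection branch
  src-ids: 0;1;2;3  # source IDs this branch infers on
branch1:
  pgie-id: 2        # e.g. the smoke/fire detection branch
  src-ids: 0;1;2;3
```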

I am testing two models: one is vehicle detection, the other is smoke/fire detection. Sources (0,1,2,3) are the test case for vehicle detection; only the source0 video contains vehicles, while the source (1,2,3) videos have none. That is why I said "only source0 can detect vehicle".

So please post your sources_different_source.csv file too.

sources_different_source.csv (269 Bytes)

When I turned off the branch0 pre-processing and ran the vehicle detection model directly, it worked correctly:
Kafka received vehicle data only for source0.
My preprocess config:

Why do you use nvdspreprocess? Can you provide the graph as a PNG image? The graph image you posted cannot be viewed clearly.

Because I only want to run detection on the ROI, not the full frame.
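For context, restricting inference to ROIs is configured through the nvdspreprocess config's per-source ROI parameters. A minimal sketch, assuming the group keys documented for Gst-nvdspreprocess (the source IDs and coordinates below are illustrative):

```ini
# Hypothetical nvdspreprocess group restricting inference to ROIs
# (keys per the Gst-nvdspreprocess docs; coordinates are illustrative).
[group-0]
src-ids=0;1;2;3
process-on-roi=1
# Format: left;top;width;height (repeat per ROI), one line per source.
roi-params-src-0=100;100;640;360
roi-params-src-1=0;0;1280;720
roi-params-src-2=0;0;1280;720
roi-params-src-3=0;0;1280;720
```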

Can you upload a "PNG" file instead of a JPEG file? It is too small to view.

How should I upload the PNG? I uploaded pipeline.png, which is about 2 MB, but I don't know why it becomes too small after being uploaded.

Can you enable the filesink and upload the output video here so we can check how the bboxes look?

Sorry for the late reply. I can't save the output video as MP4, but I recorded a video with my phone.