Does version 6.1.1 support a single PGIE with parallel SGIEs?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GeForce RTX 3090
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only): 515.65.01
• Issue Type (questions, new requirements, bugs): question

This is a follow-up question to this thread: Does DeepStream support running multiple classifiers in parallel after a detector?

I saw from this post, Parallel Inference example in DeepStream, that version 6.1.1 introduces nvdsmetamux. The samples on GitHub support multiple PGIE+SGIE branches running in parallel.

How do I support a single PGIE and multiple parallel SGIEs, as in the diagram below?

sources -> muxer -> detector -+-> classifier1 --+
                              +-> classifier2 --+-> [process NvDsBatchMeta]
                              +-> classifier3 --+

deepstream-test2 is the sample for one PGIE + multiple SGIEs. The SGIEs work in parallel.


@Fiona.Chen,

Thank you for the reply. The code from deepstream-test2 (DS 6.1.1) adds the components to the pipeline:

gst_bin_add_many (GST_BIN (pipeline),
        source, h264parser, decoder, streammux, pgie, nvtracker, sgie1, sgie2, sgie3,
        nvvidconv, nvosd, sink, NULL);

and then links them in a single serial chain:

gst_element_link_many (streammux, pgie, nvtracker, sgie1, sgie2, sgie3,
        nvvidconv, nvosd, sink, NULL);

So even though we link multiple SGIEs sequentially, they still run in parallel? In other words, is the latency of running 3 SGIEs max(sgie1-latency, sgie2-latency, sgie3-latency) [runs in parallel] rather than sum(sgie1-latency, sgie2-latency, sgie3-latency) [runs sequentially]?

Yes. The inferencing happens asynchronously. For example, while the nth frame is being inferred in SGIE1, the (n-1)th frame is already being handled by SGIE2.

No. The latency is the sum of them, or sometimes even more, because there is a buffer pool inside nvinfer.
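
If you want to verify this on your own pipeline, DeepStream's built-in latency measurement (nvds_latency_meta.h, enabled at runtime with NVDS_ENABLE_LATENCY_MEASUREMENT=1) reports per-frame latency from a pad probe. The sketch below is only illustrative, not code from this thread; the probe placement and the MAX_SOURCES bound are assumptions:

/* Hedged sketch: report per-frame pipeline latency using DeepStream's
 * latency-measurement API. Run the app with
 * NVDS_ENABLE_LATENCY_MEASUREMENT=1. MAX_SOURCES is an assumed bound
 * on the number of sources in one batch. */
#include <gst/gst.h>
#include "nvds_latency_meta.h"

#define MAX_SOURCES 4

static GstPadProbeReturn
latency_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsFrameLatencyInfo latency_info[MAX_SOURCES];

  if (nvds_enable_latency_measurement) {
    /* Fills one entry per frame in the batch; latency is the accumulated
     * time (ms) since the buffer entered the pipeline. */
    guint num_frames = nvds_measure_buffer_latency (buf, latency_info);
    for (guint i = 0; i < num_frames; i++)
      g_print ("source %u frame %u latency %.2f ms\n",
          latency_info[i].source_id, latency_info[i].frame_num,
          latency_info[i].latency);
  }
  return GST_PAD_PROBE_OK;
}

Attach it with gst_pad_add_probe() (GST_PAD_PROBE_TYPE_BUFFER) to a pad near the end of the pipeline, e.g. the OSD sink pad, and compare the reported per-frame latency against the sum of the individual SGIE times.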

If you are interested in running models in parallel, please refer to NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel. (github.com). However, your pipeline would differ from deepstream_parallel_inference_app: you may need to demux and mux after the PGIE instead of before the PGIE.
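
As a rough illustration of branching after the PGIE and merging the metadata back, here is a sketch. It is not code from the sample: the nvdsmetamux factory name comes from the parallel inference app, while the config-file name, the tee-based branching (the actual sample selects streams with per-branch demux/mux), and the helper function are assumptions:

/* Hedged sketch: one PGIE, then tee into three classifier branches whose
 * outputs are merged by nvdsmetamux. Upstream (sources -> streammux -> pgie)
 * and downstream (metamux -> nvvidconv -> nvosd -> sink) plumbing and all
 * error handling are omitted for brevity. */
#include <gst/gst.h>

static void
build_parallel_sgie_branches (GstElement *pipeline, GstElement *pgie)
{
  GstElement *tee = gst_element_factory_make ("tee", "split-after-pgie");
  GstElement *metamux = gst_element_factory_make ("nvdsmetamux", "merge-meta");
  const gchar *sgies[] = { "classifier1", "classifier2", "classifier3" };

  /* Assumed config-file name; see the parallel inference sample for the
   * real metamux configuration. */
  g_object_set (G_OBJECT (metamux), "config-file", "config_metamux.txt", NULL);

  gst_bin_add_many (GST_BIN (pipeline), tee, metamux, NULL);
  gst_element_link (pgie, tee);

  for (guint i = 0; i < G_N_ELEMENTS (sgies); i++) {
    /* A queue per branch gives each classifier its own streaming thread,
     * so the three SGIEs can actually run concurrently. */
    GstElement *queue = gst_element_factory_make ("queue", NULL);
    GstElement *sgie = gst_element_factory_make ("nvinfer", sgies[i]);
    gst_bin_add_many (GST_BIN (pipeline), queue, sgie, NULL);
    gst_element_link_many (tee, queue, sgie, NULL); /* tee hands out request pads */
    gst_element_link (sgie, metamux);               /* so does metamux's sink side */
  }
}

The per-branch queues are the key design point: each queue starts its own streaming thread, which is what lets the three classifiers run concurrently instead of serially, while nvdsmetamux merges the branch metadata back into one NvDsBatchMeta for downstream processing.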
