Parallel Inference with a different nvstreammux for each pgie

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only): 515
• Issue Type (questions, new requirements, bugs): question

I have read the Parallel Inference example in DeepStream, which uses the same nvstreammux for all models. In my case, however, I use two models, each of which requires a different input shape, so I think two nvstreammux elements are needed so that each model gets the right input shape. What do you think, or do you have any recommendations?

gst-nvinfer or gst-nvinferserver will handle the different input shapes of the different models itself; this has nothing to do with the mux. You don’t need two nvstreammux elements.
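As a quick illustration (not from the original exchange), here is a minimal gst-python sketch of two nvinfer instances that can share one nvstreammux: each nvinfer reads its model's input shape from its own config/engine and scales the batched frames internally, so the mux resolution is independent of either model. The config file paths below are hypothetical placeholders.

```python
# Minimal sketch: two nvinfer instances sharing one nvstreammux.
# Each nvinfer scales the muxed frames to its own model's input
# shape internally, so the mux resolution is independent of both.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("batch-size", 1)
streammux.set_property("width", 1920)   # mux output resolution,
streammux.set_property("height", 1080)  # not a model input shape

pgie_a = Gst.ElementFactory.make("nvinfer", "pgie-a")
pgie_a.set_property("config-file-path", "model_a_config.txt")  # placeholder

pgie_b = Gst.ElementFactory.make("nvinfer", "pgie-b")
pgie_b.set_property("config-file-path", "model_b_config.txt")  # placeholder
# A tee would fan the mux output out to both branches; the full
# wiring is sketched later in the thread.
```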


By the way, do the Python bindings support this Parallel Inference app yet? If yes, can you share some Python sample(s)?

We don’t have a Python sample for this app, but DeepStream supports Python bindings: NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications (github.com).

Thank you for your reply. Even though DeepStream supports Python bindings, it is still not clear how to build up the pipeline of nvstreammux → tee → multiple nvstreamdemux → nvstreammux → … → nvdsmetamux with the Python bindings.

Can you share some snippet code for this?
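There is no official Python sample for the parallel app, but the following is a rough, unofficial sketch of the tee/branch/metamux wiring with the Python bindings. It omits the per-branch nvstreamdemux → nvstreammux stage (which the C reference app uses to select sources per branch). The nvdsmetamux pad names and the pgie config paths are assumptions; nvdsmetamux itself ships with the parallel inference reference app rather than the base SDK, so verify the element and its pads with gst-inspect-1.0 before relying on this.

```python
# Rough, unofficial sketch of mux -> tee -> per-branch pgie -> nvdsmetamux.
# Assumptions: nvdsmetamux is built/installed from the parallel inference
# reference app and exposes request pads named "sink_%u" (check with
# `gst-inspect-1.0 nvdsmetamux`); pgie config paths are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("parallel-inference")

def make(factory, name):
    """Create an element, add it to the pipeline, and fail loudly."""
    elem = Gst.ElementFactory.make(factory, name)
    if not elem:
        raise RuntimeError(f"Failed to create {factory}")
    pipeline.add(elem)
    return elem

mux = make("nvstreammux", "mux")
mux.set_property("batch-size", 1)
mux.set_property("width", 1920)
mux.set_property("height", 1080)

tee = make("tee", "tee")
mux.link(tee)

metamux = make("nvdsmetamux", "metamux")
# The reference app also configures nvdsmetamux via a config file;
# see that app for the exact property and file format.

# One branch per model: queue -> nvinfer, then into nvdsmetamux.
for idx, cfg in enumerate(["pgie_a_config.txt", "pgie_b_config.txt"]):
    queue = make("queue", f"queue-{idx}")
    pgie = make("nvinfer", f"pgie-{idx}")
    pgie.set_property("config-file-path", cfg)  # placeholder paths

    # tee source pads are request pads named "src_%u"
    tee.get_request_pad("src_%u").link(queue.get_static_pad("sink"))
    queue.link(pgie)

    # assumed request-pad template on nvdsmetamux
    pgie.get_static_pad("src").link(metamux.get_request_pad(f"sink_{idx}"))

sink = make("fakesink", "sink")
metamux.link(sink)
# Upstream of the mux you would request its "sink_%u" pads and link your
# decoded sources, then set the pipeline to PLAYING and run a main loop.
```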

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Are you familiar with GStreamer and gst-python? See GStreamer: open source multimedia framework and the Python GStreamer Tutorial (brettviren.github.io). Please study GStreamer and gst-python before you start with DeepStream.

There are many DeepStream Python binding samples in GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications; please investigate them carefully.
