Building a Parallel Inference Pipeline in Python. Need the MetaMux plugin in Python bindings to collect all outputs together in unified NvDsMetaData

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 6.2 (docker image)
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only) 525.85.12

Hi, I’m constructing a parallel inference pipeline in Python. I have 2 branches. I’d like to collect outputs from these branches together in a unified NvDsMetaData. Is there a MetaMux plugin available in Python bindings? If not, how can I build this plugin? Thanks!

The plugin can be used as-is in Python, just like the other plugins.

Thanks. I was able to use the MetaMux plugin with: metamux = Gst.ElementFactory.make("nvdsmetamux", "metamux")

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.