Parallel inference pipeline and linking

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hi, I am trying to implement DeepStream parallel inference in Python…
Below is my code… I am unable to get the model inference output passed to the metamux…
Can you tell me where I am going wrong? (10.8 KB)

The output:

Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source_bin 1
Creating Tee
Creating nvstreamdemux
vehiclecount defaultdict(<class 'list'>, {'vehiclecount': [0, 1]})
Creating the pgie
Creating gst-dsmetamux
Creating tiler
Creating nvvidconv
Creating nvosd
Creating H265 Encoder
Creating H265 rtppay
Adding elements to Pipeline

What does this mean?

I mean linking the pgie element to the metamux element.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

It seems your pipeline is not correct. Please refer to the attached pipeline as an example. (3.3 MB)
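For orientation, the elements the log above reports suggest a branch structure roughly like the one in the parallel-inference reference app. The exact per-branch demux/mux arrangement below is an assumption based on that sample, not on the attachment:

```
sources → nvstreammux → tee ─┬─ nvstreamdemux → branch streammux → pgie_0 ─┐
                             └─ nvstreamdemux → branch streammux → pgie_1 ─┴→ metamux
metamux → tiler → nvvidconv → nvosd → H265 encoder → rtppay → sink
```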

Please read the code carefully; there are many details that are not mentioned in the document.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.