Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and a description of the function.)
Hi, I am trying to implement deepstream-parallel-inference in Python.
Below is my code. I am unable to get the model inference results passed to the metamux.
Can you tell me what I have done correctly and where I am going wrong?
parallel_pipeline.py (10.8 KB)
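For reference, this is the topology I am trying to reproduce, based on my reading of the C `deepstream_parallel_inference_app` sample (a sketch only — the branch layout and element names here are my assumptions, not taken verbatim from the attached script):

```
N x uridecodebin -> nvstreammux -> tee
  tee (branch 0) -> nvstreamdemux -> nvstreammux -> pgie_0 -> metamux sink pad
  tee (branch 1) -> nvstreamdemux -> nvstreammux -> pgie_1 -> metamux sink pad
  gst-dsmetamux -> nvmultistreamtiler -> nvvideoconvert -> nvdsosd -> H265 encoder -> rtppay -> sink
```

My understanding is that each branch's pgie src pad should link to a requested sink pad on the metamux, with the metamux config file selecting which pad drives the output buffers; this linking step is where my pipeline seems to fail.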
The output:
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source_bin 1
Creating Tee
Creating nvstreamdemux
vehiclecount defaultdict(<class 'list'>, {'vehiclecount': [0, 1]})
LINKINGGGGGG 0
LINKINGGGGGG 1
Creating the pgie
OUTSIDE LOOP
Creating gst-dsmetamux
Creating tiler
Creating nvvidconv
Creating nvosd
Creating H265 Encoder
Creating H265 rtppay
Adding elements to Pipeline