gst_pad_add_probe unable to read buffers from multiple branches of a parallel pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

• Hardware Platform: GPU
• DeepStream Version: 6.1
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version: 525.78.01 (NVIDIA-SMI 525.78.01, CUDA Version 12.0)

Hi,
I am trying to create a parallel inference pipeline, keeping the reference parallel C++ app in mind.
Here is my pipeline.
The probe only prints buffers for the first model, and only a few times; after that the pipeline stops.

Did you refer to https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app?
Could you attach your log with GST_DEBUG=3?

If you read this: “keeping the reference parallel cpp in mind.”

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Because these branches run in parallel, which is your first model? Could you attach your minimized demo code?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.