Issue with Parallel Inferencing in DeepStream (Python) – Video Not Playing

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Orin
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only): 6.0+b106
• TensorRT Version: 8.6.4
• NVIDIA GPU Driver Version (valid for GPU only): 12.2

working_parallel.txt (25.9 KB)

I am working on parallel inferencing in DeepStream using Python and would like to confirm if my approach is correct. While running the code, all plugins initialize successfully, but the video does not play, and Stream 0 keeps repeating infinitely.

  1. Is my implementation of parallel inferencing correct?
  2. What could be causing the video not to play, and why is Stream 0 repeating continuously?
  3. Is my way of arranging the plugins in the pipeline correct?
  4. Is there a way to implement parallel inferencing in Python instead of C++?

Major Observed Issues:

  • No video output, despite the RTSP streams being active.
  • Low FPS (PERF: {'stream0': 0.0, 'stream1': 0.0}), indicating frames are not being processed.
  • Pad linking errors causing incorrect data flow between elements.
  • "Stream format not found" errors, leading to frame drops.
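For reference, the dynamic pad-linking pattern I am following looks like this, reduced to a plain helper so the decision logic is visible without the GStreamer bindings (the names and pad indices here are illustrative, not my actual code; in the real app the caps name comes from `pad.get_current_caps()` and the streammux sink pad from `streammux.request_pad_simple(...)` before linking):

```python
# Sketch: decide where a dynamically-added decodebin/rtspsrc pad should go.
# In a real DeepStream Python app this logic lives inside the "pad-added"
# callback; here it is a pure helper so it can be read (and tested) alone.

def target_sink_for_caps(caps_name: str, stream_index: int):
    """Map a new pad's caps name to an nvstreammux request-pad name.

    Returns None for non-video pads (e.g. audio), which should be ignored
    rather than linked into the video pipeline.
    """
    if caps_name.startswith("video/"):
        # nvstreammux request pads are named sink_0, sink_1, ...
        return f"sink_{stream_index}"
    return None
```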

Any guidance or suggestions would be greatly appreciated. Thanks in advance!

From the graph you posted, your pipeline differs from NVIDIA-AI-IOT/deepstream_parallel_inference_app, a project demonstrating how to use nvmetamux to run multiple models in parallel.

That sample is open source; please compare your implementation with it.

DeepStream supports Python APIs, and the bindings are also open source. See Python Sample Apps and Bindings Source Details — DeepStream documentation.

I took the pipeline structure from this link: NVIDIA-AI-IOT. The difference is in the decoding: I decode the RTSP stream through a probe function. What should I change in my code to build a working pipeline?

Along with that, the newer version of my code runs, but video frames are still not reaching the end of the pipeline. Is there any special decoding step that must be applied to RTSP streams before using them with metamux?

This is the complete graph.

What do you mean by “special decoding step”?

I meant different decoding configurations, such as nvv4l2decoder. Is there any specific decoding method for RTSP streams?

Also, is there any Python implementation of parallel inferencing? The linked GitHub repository only has a C++ implementation.

It depends on the video (payload) format inside the RTSP stream. nvv4l2decoder supports the H264, H265, and MJPEG formats.
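As an illustrative sketch only: the depayloader and parser in front of nvv4l2decoder change with the payload format, while the decoder stays the same. The chains below are the standard GStreamer elements for each payload, but treat the exact branch (including the `mjpeg=1` decoder property) as an assumption to verify against your stream:

```python
# Sketch: build a gst-launch style decode branch for one RTSP source.
# Only the depay/parse stage differs per payload; nvv4l2decoder is shared.
# The mjpeg=1 decoder property is assumed from Jetson builds -- verify it.

DECODE_BRANCHES = {
    "h264":  "rtph264depay ! h264parse ! nvv4l2decoder",
    "h265":  "rtph265depay ! h265parse ! nvv4l2decoder",
    "mjpeg": "rtpjpegdepay ! jpegparse ! nvv4l2decoder mjpeg=1",
}

def rtsp_decode_branch(url: str, payload: str) -> str:
    """Return a pipeline-description fragment for one RTSP source."""
    if payload not in DECODE_BRANCHES:
        raise ValueError(f"unsupported payload: {payload}")
    return f'rtspsrc location="{url}" ! {DECODE_BRANCHES[payload]} ! nvvideoconvert'
```

You can paste the resulting fragment into a `gst-launch-1.0` command to check that the source decodes on its own before wiring it into the full pipeline.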

No.

Can I implement parallel inferencing in Python rather than C++?

If it is possible, how effectively does Python code work compared to C++?

DeepStream supports Python APIs. See Python Sample Apps and Bindings Source Details — DeepStream documentation.

It is just a programming language; you can write a DeepStream app in the language you prefer.

If you use the correct bindings and implement exactly the same logic as the C++ sample, the Python app works the same as the C++ app.
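To illustrate that the same topology is expressible from Python, here is a hedged sketch of the parallel-branch layout as a gst-launch style description string. The config paths are placeholders, and whether the metamux element exposes `sink_%u` request pads under exactly these names should be verified against the parallel inference sample, not taken from this sketch:

```python
# Sketch: parallel-inference topology (streammux -> tee -> N x nvinfer
# branches -> metamux), expressed as a pipeline-description string.
# Config paths are placeholders; element/pad names follow this thread's
# naming and must be checked against the open-source sample.

def parallel_branches(pgie_configs):
    """Build tee branches, one nvinfer per config, merged by metamux."""
    parts = [
        "nvstreammux name=mux batch-size=2 width=1280 height=720 ! tee name=t"
    ]
    for i, cfg in enumerate(pgie_configs):
        # Each branch gets its own queue so the branches run in parallel.
        parts.append(f"t. ! queue ! nvinfer config-file-path={cfg} ! meta.sink_{i}")
    parts.append("nvmetamux name=meta ! queue ! fakesink")
    return " ".join(parts)
```

The same structure can be built element-by-element with the Python bindings (`Gst.ElementFactory.make`, request pads, `pad-added` callbacks), exactly as the C++ sample does.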