Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson AGX Orin
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only): 6.0+b106
• TensorRT Version: 8.6.4
• NVIDIA GPU Driver Version (valid for GPU only): 12.2
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
working_parallel.txt (25.9 KB)
I am working on parallel inferencing in DeepStream using Python and would like to confirm whether my approach is correct. When I run the code (attached as working_parallel.txt), all plugins initialize successfully, but the video does not play, and Stream 0 keeps repeating indefinitely.
- Is my implementation of parallel inferencing correct? (A simplified sketch of the layout I am attempting follows this list.)
- What could be causing the video not to play, and why does Stream 0 repeat continuously?
- Is my arrangement of the plugins in the pipeline correct?
- Is there a way to implement parallel inferencing in Python instead of C++?
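For context, this is a minimal sketch of the structure I am trying to build: one RTSP source batched by nvstreammux, then split by a tee into two independent nvinfer branches. The URI, config-file paths, and element names below are placeholders for illustration, not the exact values from my attached file:

```python
#!/usr/bin/env python3
# Minimal sketch of the layout I am attempting: one RTSP source batched by
# nvstreammux, then tee'd into two independent nvinfer branches.
# The URI and the two nvinfer config paths are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.Pipeline.new("parallel-infer")

streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("batch-size", 1)
streammux.set_property("width", 1280)
streammux.set_property("height", 720)
streammux.set_property("live-source", 1)
tee = Gst.ElementFactory.make("tee", "tee")
for e in (streammux, tee):
    pipeline.add(e)
streammux.link(tee)

def on_pad_added(decodebin, pad, mux):
    # uridecodebin pads appear at runtime; link only the decoded video pad
    # to a requested nvstreammux sink pad.
    caps = pad.get_current_caps() or pad.query_caps(None)
    if caps.get_structure(0).get_name().startswith("video"):
        sinkpad = mux.request_pad_simple("sink_0")  # get_request_pad() on GStreamer < 1.20
        pad.link(sinkpad)

src = Gst.ElementFactory.make("uridecodebin", "src-0")
src.set_property("uri", "rtsp://<camera-uri>")  # placeholder
src.connect("pad-added", on_pad_added, streammux)
pipeline.add(src)

def add_branch(name, pgie_config):
    # Each branch is queue -> nvinfer -> fakesink, fed by a tee request pad.
    queue = Gst.ElementFactory.make("queue", f"queue-{name}")
    pgie = Gst.ElementFactory.make("nvinfer", f"pgie-{name}")
    pgie.set_property("config-file-path", pgie_config)
    sink = Gst.ElementFactory.make("fakesink", f"sink-{name}")
    sink.set_property("sync", False)
    for e in (queue, pgie, sink):
        pipeline.add(e)
    queue.link(pgie)
    pgie.link(sink)
    tee_src = tee.request_pad_simple("src_%u")
    tee_src.link(queue.get_static_pad("sink"))

add_branch("detector", "pgie_detector_config.txt")      # placeholder config
add_branch("classifier", "pgie_classifier_config.txt")  # placeholder config

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```

As I understand it, the C++ parallel-inference reference app additionally uses nvdsmetamux to merge the metadata coming back from the branches; I have left that out here to keep the sketch short, and I am unsure whether it is required for my case.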
Major Observed Issues:
- No video output despite the RTSP streams being active.
- Low FPS (PERF: {'stream0': 0.0, 'stream1': 0.0}), indicating frames are not being processed.
- Pad linking errors causing incorrect data flow between elements (the diagnostic checks I am adding are sketched after this list).
- "Stream format not found" errors, leading to frame drops.
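To chase the pad-linking and stream-format errors, I am adding explicit checks on every manual pad link plus a bus watch, roughly like this (the helper names are mine, not from any DeepStream API):

```python
# Sketch: fail fast on bad pad links and print bus errors/warnings instead
# of letting the pipeline stall silently. Helper names are illustrative.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

def link_or_raise(src_pad, sink_pad):
    # Gst.Pad.link returns a Gst.PadLinkReturn; anything but OK is a failure.
    ret = src_pad.link(sink_pad)
    if ret != Gst.PadLinkReturn.OK:
        raise RuntimeError(
            f"link {src_pad.get_name()} -> {sink_pad.get_name()} failed: {ret}")

def on_bus_message(bus, msg, loop):
    if msg.type == Gst.MessageType.ERROR:
        err, debug = msg.parse_error()
        print(f"ERROR from {msg.src.get_name()}: {err.message}\n{debug}")
        loop.quit()
    elif msg.type == Gst.MessageType.WARNING:
        warn, debug = msg.parse_warning()
        print(f"WARNING from {msg.src.get_name()}: {warn.message}")
    return True

# Usage, given an existing `pipeline` and GLib.MainLoop `loop`:
# bus = pipeline.get_bus()
# bus.add_signal_watch()
# bus.connect("message", on_bus_message, loop)
```

Running with GST_DEBUG=3 also shows which element rejects the link or the caps negotiation, but so far I have not been able to narrow it down.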
Any guidance or suggestions would be greatly appreciated. Thanks in advance!