Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson AGX Orin
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only): 6.0+b106
• TensorRT Version: 8.6.4
• NVIDIA GPU Driver Version (valid for GPU only): 12.2
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or for which sample application — and the function description.)
I am working on parallel inferencing in DeepStream using Python and would like to confirm whether my approach is correct. While running the code, all plugins initialize successfully, but the video does not play, and Stream 0 keeps repeating indefinitely.
Is my implementation of parallel inferencing correct?
What could be causing the video not to play, and why is Stream 0 repeating continuously?
Is my way of arranging the plugins in the pipeline correct?
Is there a way to implement parallel inferencing in Python instead of C++?
Major Observed Issues:
No Video Output despite the RTSP streams being active.
Low FPS (PERF: {'stream0': 0.0, 'stream1': 0.0}), indicating frames are not being processed.
Pad Linking Errors causing incorrect data flow between elements.
Stream Format Not Found errors, leading to frame drops.
Any guidance or suggestions would be greatly appreciated. Thanks in advance!
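For reference, here is a minimal sketch of the plugin arrangement I understand the parallel-inference layout to use (based on the NVIDIA-AI-IOT parallel inference sample: a common front end, one `nvinfer` branch per model fanned out by a `tee`, and the branches rejoined by `nvdsmetamux`). The branch names and pad labels are illustrative, not taken from my actual code:

```python
# Sketch of the parallel-inference plugin ordering (illustrative only).
# Front end feeds a tee; each branch runs its own nvinfer; nvdsmetamux
# merges the branch metadata back before the common display back end.

def build_parallel_layout(num_branches):
    """Return (front_end, inference_branches, back_end) element chains
    for a parallel-inference pipeline with `num_branches` models."""
    front = ["nvurisrcbin (per RTSP source)", "nvstreammux", "tee"]
    branches = [
        ["queue", f"nvinfer (pgie_{i})", f"nvdsmetamux sink_{i}"]
        for i in range(num_branches)
    ]
    back = ["nvdsmetamux", "nvmultistreamtiler", "nvvideoconvert",
            "nvdsosd", "nv3dsink"]
    return front, branches, back

front, branches, back = build_parallel_layout(2)
print(" -> ".join(front))
for b in branches:
    print("  branch:", " -> ".join(b))
print(" -> ".join(back))
```

Is this the ordering the reference app expects, or should the branches be arranged differently?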
I took the pipeline structure as a reference from this link: NVIDIA-AI-IOT. The main difference is in the decoding: I decode the RTSP stream through a probe function. What should I change in my code to build a working pipeline?
Along with that, the newer version of my code runs, but video frames are still not reaching the end of the pipeline. Is there any special decoding step that must be applied to RTSP streams before they can be used with metamux?
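To make the question concrete, this is the kind of source chain I believe is typically used instead of probe-based decoding: `uridecodebin` handles the RTSP depay/parse/decode internally, and an `nvvideoconvert` plus caps filter ensures the frames reach `nvstreammux` as NV12 buffers in NVMM memory. The URI and pad name below are placeholders:

```python
# Hedged sketch of one RTSP source branch for DeepStream (not my actual
# code): uridecodebin decodes the stream, nvvideoconvert + caps ensure
# NV12/NVMM output, and the branch links to an nvstreammux sink pad.

def rtsp_source_chain(uri, mux_pad):
    """Return a gst-launch-style description of an RTSP source branch
    feeding the nvstreammux sink pad `mux_pad`."""
    return (
        f"uridecodebin uri={uri} "
        f"! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 "
        f"! mux.{mux_pad}"
    )

print(rtsp_source_chain("rtsp://<camera-ip>/stream", "sink_0"))
```

Is a chain like this required for metamux to work, or is decoding inside a probe function also supported?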
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.