Hi, I’m encountering a performance inconsistency in DeepStream when processing two simulated RTSP streams and would appreciate your insights.
Issue Description
- Setup: Two simulated RTSP streams, processed via:
  - A single pipeline (using `nvurisrcbin` + the new `nvstreammux`)
  - Two independent pipelines (one per stream)
- Observation:
  - Single pipeline: missed detections (496 total vs. a baseline of 596).
  - Independent pipelines: better accuracy (576 detections, 0 misses).
- Config: the streammux settings prioritize batch processing (`max-num-frames-per-batch=1/2`). Full config attached.
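For context, here is a minimal sketch of how the relevant settings are laid out in a new-streammux config file (the new `nvstreammux` is enabled via `USE_NEW_NVSTREAMMUX=yes`; key names follow the new-streammux config format, and the values shown are illustrative, not my exact attached config):

```ini
# Sketch of a new nvstreammux config file (values illustrative)
[property]
algorithm-type=1          # round-robin batching across sources
batch-size=2              # one slot per stream
max-same-source-frames=1  # limit frames from a single source per batch

# Per-source limits on how many frames may enter one batch
[source-config-0]
max-num-frames-per-batch=1

[source-config-1]
max-num-frames-per-batch=1
```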
Questions
- Why does the single pipeline underperform despite identical streams? Could it relate to:
  - Resource contention in `streammux`?
  - Suboptimal batching for multiple streams in one pipeline?
- How can I tune the single pipeline to match the performance of independent pipelines?
Additional Context
- Local PC: the two approaches give different results (as above).
- Cloud server: both approaches perform equally well (likely due to greater resources).
- Goal: Achieve consistent accuracy regardless of pipeline architecture.
I’d appreciate any suggestions to balance performance or debug the single-pipeline bottleneck. Thank you!
config_mux_source3_ancien.txt (2.5 KB)