• Hardware Platform: GPU
• DeepStream Version: 6.0.1
• TensorRT Version: 22.214.171.124-1
• NVIDIA GPU Driver Version: 470.103.01
• Issue Type: questions and errors
I use the DeepStream SDK with Python to handle a large number of RTSP streams (>100), with the following pipeline:
rtspsrc ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! capsfilter ! queue ! nvstreammux ! nvvideoconvert ! nvinfer ! nvtracker ! fakesink.
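For context, I assemble this pipeline from Python by generating a gst-launch-style description for all sources. A minimal sketch (element ordering as above; URIs, pad names, and config paths are illustrative, not my exact code):

```python
def build_pipeline_description(rtsp_uris):
    """Build a gst-launch-1.0 style description fanning N RTSP
    sources into one nvstreammux (new/beta) instance.
    File paths and URIs here are placeholders."""
    branches = []
    for i, uri in enumerate(rtsp_uris):
        branches.append(
            f"rtspsrc location={uri} ! rtph264depay ! h264parse ! "
            f"nvv4l2decoder ! nvvideoconvert ! "
            f"'video/x-raw(memory:NVMM)' ! queue ! mux.sink_{i}"
        )
    tail = (
        "nvstreammux name=mux config-file-path=mux_config.txt ! "
        "nvvideoconvert ! nvinfer config-file-path=pgie_config.txt ! "
        "nvtracker ! fakesink"
    )
    return " ".join(branches) + " " + tail

desc = build_pipeline_description(
    [f"rtsp://camera-{i}/stream" for i in range(3)]
)
```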
I use Gst-nvstreammux New (Beta) with adaptive batching and the following config:
```
[property]
algorithm-type=1
adaptive-batching=1
max-fps-control=1
overall-max-fps-n=6
overall-max-fps-d=1
overall-min-fps-n=4
overall-min-fps-d=1
max-same-source-frames=1
```
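Since the streammux parser complains about missing keys (see the errors below), I sanity-check the config file before launching with a small helper; this is purely illustrative, using Python's standard key-file parser rather than anything from DeepStream:

```python
import configparser

# The streammux config I currently pass via config-file-path.
MUX_CONFIG = """
[property]
algorithm-type=1
adaptive-batching=1
max-fps-control=1
overall-max-fps-n=6
overall-max-fps-d=1
overall-min-fps-n=4
overall-min-fps-d=1
max-same-source-frames=1
"""

def missing_keys(config_text, required):
    """Return the entries of `required` absent from [property]."""
    cp = configparser.ConfigParser()
    cp.read_string(config_text)
    present = set(cp["property"])
    return [k for k in required if k not in present]

# The two keys the streammux parser warns about in my logs:
print(missing_keys(MUX_CONFIG, ["enable-source-rate-control", "batch-size"]))
```

This confirms the file really lacks those two keys; the open question is whether they are required.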
- First question, about errors in the logs (despite the errors, the pipeline works):
[Error while parsing streammux config file: Key file does not have key "enable-source-rate-control" in group "property"]
[Error while parsing streammux config file: Key file does not have key "batch-size" in group "property"]
The docs contain no information about the enable-source-rate-control property.
And why is batch-size required when I use adaptive batching?
- Second question: nvinfer also has a batch-size property, and my TensorRT engine was built for that batch size.
Since RTSP sources are not stable, there are cases when streammux does not collect a full batch. How does nvinfer behave in that situation? For example, batch-size is 20, but only 15 frames were collected. (I can't wait too long to collect frames from all sources, because that would increase latency.)
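To quantify the trade-off I have in mind: my reading of overall-min-fps (which may be wrong, hence this question) is that the muxer cannot wait longer than one min-fps interval before pushing a batch, so a slow source simply misses the batch:

```python
def max_batch_wait_ms(min_fps_n, min_fps_d):
    """Upper bound on how long the muxer can wait before pushing a
    (possibly partial) batch, assuming it must sustain overall-min-fps.
    This is my interpretation of the knob, not a documented formula."""
    return 1000.0 * min_fps_d / min_fps_n

# With overall-min-fps = 4/1 a batch must go out at least every 250 ms;
# any of the 20 sources that hasn't delivered a frame by then is absent,
# which is how I end up with e.g. 15 of 20 slots filled.
print(max_batch_wait_ms(4, 1))  # -> 250.0
```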