There is a phenomenon with offline video files and RTSP streams that I don't understand.

When I set the source to a single offline video file, the detected output video stutters (with the log message `WARNING: Overriding infer-config batch-size (4) with number of sources (1)`), but when I feed in the same number of offline videos as the batch-size (4), playback is smooth.
Then I changed the input to RTSP: with only one RTSP stream, the detected video is also smooth.
Why is that?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
• The pipeline being used

deepstream-app version 6.1.0
DeepStreamSDK 6.1.0
CUDA Driver Version: 11.4
CUDA Runtime Version: 11.0
TensorRT Version: 8.2
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3
Device: A6000

Could you please attach your pipeline?

As for the pipeline, I used deepstream-test3 unchanged.

nvstreammux -> nvinfer -> nvdslogger -> nvtiler -> nvvidconv -> nvosd->sink
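The warning in the original question matches what the deepstream-test3 sample does: it sets the muxer batch-size to the number of sources and overrides the value from the nvinfer config file when the two disagree. A minimal sketch of that decision (the function name and print format are illustrative, not SDK APIs):

```python
def effective_batch_size(config_batch_size: int, num_sources: int) -> int:
    """Mirror deepstream-test3's behaviour: the app uses the number of
    sources as the batch size, warning when it overrides the value
    configured in the nvinfer config file."""
    if config_batch_size != num_sources:
        print(f"WARNING: Overriding infer-config batch-size "
              f"({config_batch_size}) with number of sources ({num_sources})")
    return num_sources

# One source with a config batch-size of 4 triggers the override seen above.
effective_batch_size(4, 1)
```

With a configured batch-size of 4 and a single source, the engine built for batch 4 runs with batch 1, which is one reason throughput can look uneven for a lone file source.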

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

We suggest you set the batch-size equal to the number of sources.
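For a single source, that means setting batch-size to 1 in the nvinfer configuration file the sample loads (e.g. dstest3_pgie_config.txt in the deepstream-test3 sample), so it matches the muxer and no override warning is printed. A sketch of the relevant fragment:

```ini
[property]
# Match the number of input sources (1 here) so nvinfer
# does not have to override the configured batch size.
batch-size=1
```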

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.