I would like to ask a question related to the live-source setting and streammux configuration.
According to the documentation, we set live-source to 1 when the streams are live. Does "sources" here refer to RTSP streams and USB cameras? Correct me if I'm wrong.
Another question: if I have a primary detector that only accepts an input width of 640 and an input height of 480, should I set the width and height of streammux to 640x480?
Is there any relationship between the frames output by streammux and the input frames to the primary detector? Does DeepStream apply any pre-processing steps?
[streammux]
## Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
## Timeout in usec to wait after the first buffer is available
## before pushing the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1
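To make the question concrete, here is a minimal sketch of the kind of primary-detector (nvinfer) config I am asking about. The file name and scale factor are placeholders, not my actual setup; the point is the infer-dims line, which declares the 640x480 input the model expects while streammux outputs 1280x720:

```ini
# Hypothetical pgie (nvinfer) config sketch; values below are illustrative only.
[property]
gpu-id=0
# Placeholder normalization factor (1/255)
net-scale-factor=0.0039215697906911373
# Placeholder engine file name
model-engine-file=model_b1_gpu0_fp16.engine
batch-size=1
## 2 = FP16 precision
network-mode=2
## Model input as channel;height;width - here 3x480x640,
## while streammux output above is 1280x720
infer-dims=3;480;640
```

My understanding is that nvinfer scales the streammux output buffers to the dimensions given by infer-dims before inference, which is why I am unsure whether matching the streammux resolution to the model resolution is required or merely avoids one extra scaling step.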
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.11
• NVIDIA GPU Driver Version (valid for GPU only) 460
• Issue Type( questions, new requirements, bugs) Questions