Live Source Definition and Streammux Resolution

I would like to ask a question related to the live-source setting and streammux configuration.

According to the documentation, we set live-source to 1 when the streams are live. Does "sources" here refer to RTSP streams and USB cameras? Correct me if I am wrong.

Another question: if I have a primary detector that only accepts an input width of 640 and input height of 480, should I set the streammux width and height to 640x480?

Is there any relationship between the frames output by streammux and the input frames to the primary detector? Does DeepStream apply any pre-processing steps?

```
## Boolean property to inform muxer that sources are live
## time out in usec, to wait after the first buffer is available
## to push the batch even if the complete batch is not formed
## Set muxer output width and height
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1
```
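For context, a minimal `[streammux]` group along the lines the comments above describe might look like the following. This is only an illustrative sketch in the deepstream-app config format; the values (batch size, resolution, timeout) are assumptions for a hypothetical setup with two live sources, not taken from the original post:

```ini
[streammux]
## Sources are live (e.g. RTSP streams or USB cameras)
live-source=1
## One batch slot per source
batch-size=2
## Push a partial batch after 40 ms if it has not filled
batched-push-timeout=40000
## Muxer output resolution; frames are scaled to this size
width=1920
height=1080
# attach-sys-ts-as-ntp=1
```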

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.11
• NVIDIA GPU Driver Version (valid for GPU only) 460
• Issue Type( questions, new requirements, bugs) Questions


No, it is not necessary to set the streammux width and height to match the detector's input resolution.

Yes. The output of nvstreammux is batched frames, and those batched frames are the input to the inference module. Inside the Gst-nvinfer plugin (Gst-nvinfer — DeepStream 5.1 Release documentation), pre-processing such as scaling, normalization, and color format conversion is performed.
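To make the scaling step concrete, here is a rough sketch of the resize arithmetic Gst-nvinfer applies when mapping a streammux frame onto the network input resolution. This is my own simplified illustration, not the plugin's actual code; the `maintain_aspect_ratio` flag mirrors the nvinfer `maintain-aspect-ratio` config property, and the padding-on-the-right/bottom behavior is an assumption for the example:

```python
def nvinfer_scale(src_w, src_h, net_w, net_h, maintain_aspect_ratio=True):
    """Approximate the size a frame is scaled to before inference.

    When maintain_aspect_ratio is True, the frame is scaled by a single
    factor to fit inside net_w x net_h and the remainder is padded;
    otherwise it is stretched to the network resolution directly.
    Returns (scaled_w, scaled_h, pad_right, pad_bottom).
    """
    if not maintain_aspect_ratio:
        return net_w, net_h, 0, 0  # stretched to fit, no padding
    scale = min(net_w / src_w, net_h / src_h)
    scaled_w = int(round(src_w * scale))
    scaled_h = int(round(src_h * scale))
    return scaled_w, scaled_h, net_w - scaled_w, net_h - scaled_h

# Example: a 1920x1080 streammux frame into a 640x480 detector input.
print(nvinfer_scale(1920, 1080, 640, 480))  # (640, 360, 0, 120)
```

So even if streammux outputs 1920x1080, the detector still receives frames at its own 640x480 input size, which is why matching the two resolutions is not required.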

