In my DeepStream pipeline I use a YOLOv3 model (batch size 16) in nvinfer, and the sink is rtmpsink;
the streaming server is nginx with the http-flv module.
With 1 camera the stream plays fine.
With 8 cameras, VLC warns "picture is too late to be displayed …", and viewing the stream through a browser stutters. Should this be handled at the push end to make the video stream smoother, or at the playback end?
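For reference, here is a minimal Python/GStreamer sketch of the kind of pipeline described above: 8 sources batched through nvstreammux and nvinfer, tiled, encoded to H.264, and pushed over RTMP to nginx. The camera URIs, the nginx publish URL, the nvinfer config file name, and the push-side settings (sync=false on rtmpsink, a leaky queue before flvmux) are assumptions for illustration, not the exact application used here.

```python
#!/usr/bin/env python3
# Minimal sketch (assumptions marked below), not the poster's exact application:
# batch N cameras through nvstreammux -> nvinfer (YOLOv3 engine, batch-size 16)
# and push one tiled H.264 stream to an nginx RTMP/http-flv endpoint.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

NUM_SOURCES = 8                                   # assumption: 8 RTSP cameras
CAMERA_URIS = [f"rtsp://camera-{i}.local/stream"  # placeholder camera URIs
               for i in range(NUM_SOURCES)]
RTMP_URL = "rtmp://nginx-host/live/ds"            # placeholder nginx publish URL

# Mux/infer/encode/push chain. sync=false and the leaky queue are push-side
# knobs commonly tried so late frames are dropped at the sender instead of
# piling up and stalling the player.
pipeline_str = (
    "nvstreammux name=mux batch-size=16 width=1920 height=1080 "
    "batched-push-timeout=40000 ! "
    "nvinfer config-file-path=config_infer_yolov3.txt ! "   # assumed config name
    "nvmultistreamtiler rows=2 columns=4 ! nvvideoconvert ! nvdsosd ! "
    "nvvideoconvert ! nvv4l2h264enc bitrate=4000000 ! h264parse ! "
    "queue leaky=downstream max-size-buffers=30 ! "
    "flvmux streamable=true ! "
    f"rtmpsink location={RTMP_URL} sync=false "
    + " ".join(
        f"uridecodebin uri={uri} ! mux.sink_{i}"
        for i, uri in enumerate(CAMERA_URIS)
    )
)

pipeline = Gst.parse_launch(pipeline_str)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```

The nvstreammux batch-size of 16 is kept to match the YOLOv3 engine mentioned in the post, even though only 8 sources are connected in this sketch.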
• Hardware Platform (Jetson / GPU)
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)