Long latency when using HLS input

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Tesla T4
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
455.45.01
• Issue Type( questions, new requirements, bugs)
bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Using GStreamer and DeepStream v3 with 10 HLS sources:
uridecodebin uri='file:///storage/ext_media/classifier0/bunny/playlist.m3u8' ! identity sleep-time=400 ! 'video/x-raw, format=I420' ! videoconvert ! videorate ! 'video/x-raw, format=NV12, framerate=10/1' ! nvvidconv gpu-id=0 ! 'video/x-raw(memory:NVMM)'
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
uridecodebin

Input is HLS, segment size 2 seconds, live mode.
When I run the pipeline with a single stream, the glass-to-glass latency is about 10 seconds.
When adding 10 streams, the latency accumulates and reaches up to 100 seconds glass-to-glass.
What are the options to reduce the latency?
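A minimal sketch of what I would try first (standard GStreamer properties, untested on this exact setup): drop the identity sleep-time throttle, and put a leaky queue on each source so stale frames are discarded instead of piling up:

uridecodebin uri='file:///storage/ext_media/classifier0/bunny/playlist.m3u8' ! queue leaky=downstream max-size-buffers=4 ! videoconvert ! videorate ! 'video/x-raw, format=NV12, framerate=10/1' ! nvvidconv gpu-id=0 ! 'video/x-raw(memory:NVMM)' ! fakesink sync=false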

What is the format, resolution and FPS of your HLS stream?

What do you mean by "When adding 10 streams …"? What is the pipeline?

Video codec: H.264 (libx264)
Frame rate: 10 fps
HLS segment duration: 2 seconds
HLS list: 10 segments
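Some back-of-the-envelope numbers from these settings: the playlist window holds 10 segments × 2 s = 20 s of media, so depending on where in the playlist playback starts, a client can be up to roughly 20 s behind live before the pipeline adds anything. On top of that, identity sleep-time=80000 sleeps 80 ms per buffer (the property is in microseconds), which at 10 fps consumes 80% of the 100 ms frame budget.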
The pipeline looks as follows:

gst-launch-1.0 -vv nvstreammux name=mux width=544 height=544 batch-size=32 gpu-id=0 batched-push-timeout=200000 !
nvvidconv num-buffers-in-batch=32 gpu-id=0 !
'video/x-raw(memory:NVMM), format=RGBA' !
infer !
queue !
fakesink
uridecodebin uri='file:///storage/ext_media/classifier0/c1/playlist.m3u8' is-live=true sync-to-first=true ! identity sleep-time=80000 ! 'video/x-raw, format=I420' ! videoconvert ! videorate ! 'video/x-raw, format=NV12, framerate=10/1' ! nvvidconv gpu-id=0 ! 'video/x-raw(memory:NVMM)' ! tee name=t0
t0. ! nvvidconv gpu-id=0 ! 'video/x-raw' ! videorate ! 'video/x-raw, framerate=10/1' ! videoscale ! 'video/x-raw, width=320, height=240' ! jpegenc quality=23 ! multifilesink location=/storage/ext_media/classifier0/images/0_%d.jpg
t0. ! queue ! mux.sink_0
uridecodebin uri='file:///storage/ext_media/classifier0/c1_1/playlist.m3u8' is-live=true sync-to-first=true ! identity sleep-time=40000 ! 'video/x-raw, format=I420' ! videoconvert ! videorate ! 'video/x-raw, format=NV12, framerate=10/1' ! nvvidconv gpu-id=0 ! 'video/x-raw(memory:NVMM)' ! tee name=t1
t1. ! nvvidconv gpu-id=0 ! 'video/x-raw' ! videorate ! 'video/x-raw, framerate=10/1' ! videoscale ! 'video/x-raw, width=320, height=240' ! jpegenc quality=23 ! multifilesink location=/storage/ext_media/classifier0/images/1_%d.jpg
t1. ! queue ! mux.sink_1
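One variation I am considering (a sketch only; live-source on nvstreammux and leaky queues are standard properties, but I have not verified them against this DeepStream version): mark the muxer input as live, bring batched-push-timeout down to about one frame interval (100000 us at 10 fps), and make the mux branches leaky:

gst-launch-1.0 -vv nvstreammux name=mux width=544 height=544 batch-size=32 gpu-id=0 live-source=1 batched-push-timeout=100000 ! ...
t0. ! queue leaky=downstream max-size-buffers=4 ! mux.sink_0
t1. ! queue leaky=downstream max-size-buffers=4 ! mux.sink_1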

You can check the CPU and GPU loading when you run the 10-stream case to find the possible bottleneck.
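For example, with standard tools, running the one-stream and ten-stream cases back to back and comparing:

nvidia-smi --query-gpu=utilization.gpu,utilization.memory --format=csv -l 1
top -b -d 1 | grep gst-launch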

I have checked; the GPU and CPU loads are both about 40%.

Do you mean both CPU loading and GPU loading are 40% for 10 streams? What is the loading of one stream?

How did you monitor the GPU loading? Can you post the log of “nvidia-smi dmon” command?
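For example (the flags are from nvidia-smi dmon's help: -s u selects the utilization counters, -d is the sampling interval in seconds, -c the number of samples):

nvidia-smi dmon -s u -d 1 -c 60 > dmon.log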

There is no update from you for a period, assuming this is not an issue anymore.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.