Feedback on Issue with deepstream-test5 Handling 16 1080p RTSP Streams

When using the deepstream-test5 example to process 16 streams of 1080p RTSP video, I have observed that the output video streams appear blurry. Reducing the number of RTSP inputs to nine or fewer eliminates the blurriness. Additionally, setting drop-frame-interval=2 to discard some frames also prevents the screen artifacts.

Environment Information:

  • Operating System: Ubuntu 20.04 running in Docker
  • GPU: NVIDIA GeForce GTX 1080 Ti
  • DeepStream Version: 6.3

Here is the relevant section of my application configuration file for reference:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source-list]
num-source-bins=16
list=rtsp://192.168.99.151:8554/mystream9;rtsp://192.168.99.151:8554/mystream10;rtsp://192.168.99.151:8554/mystream11;rtsp://192.168.99.151:8554/mystream12;rtsp://192.168.99.151:8554/mystream19;rtsp://192.168.99.151:8554/mystream20;rtsp://192.168.99.151:8554/mystream21;rtsp://192.168.99.151:8554/mystream22;rtsp://192.168.99.151:8554/mystream23;rtsp://192.168.99.151:8554/mystream24;rtsp://192.168.99.151:8554/mystream25;rtsp://192.168.99.151:8554/mystream26;rtsp://192.168.99.151:8554/mystream27;rtsp://192.168.99.151:8554/mystream28;rtsp://192.168.99.151:8554/mystream29;rtsp://192.168.99.151:8554/mystream30
sgie-batch-size=16

[source-attr-all]
enable=1
type=4
num-sources=1
gpu-id=0
cudadec-memtype=0
latency=150
rtsp-reconnect-interval-sec=0



[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=6

#   msg-conv-
msg-conv-config=/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-test6/configs/dstest6_msgconv_sample_config.yml

#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=257
msg-conv-msg2p-new-api=1
msg-conv-frame-interval=25

msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#  Provide your msg-broker-conn-str here
msg-broker-conn-str=192.168.11.224;9092;dstest
topic=dstest
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt


[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=nvdrmvideosink
type=4
#1=h264 2=h265
codec=2
#encoder type 0=Hardware 1=Software
enc-type=0
sync=1
bitrate=5000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
# set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=10
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400


[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=16
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=100000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=1
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
batch-size=16
config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/DeepStream-Yolo/config_infer_primary_yoloV5lite.txt

[tests]
file-loop=1

Steps to Reproduce:

  1. Run the deepstream-test5 example with 16 streams of 1080p RTSP videos.
  2. Observe the quality of the output video streams.

Expected Result:

I expect that, even with 16 streams of 1080p RTSP videos, the output video streams should not exhibit blurriness.

** output **

Additional Information:

  • I have attempted to mitigate the issue by setting drop-frame-interval=2 to reduce the frame rate (a sketch of where I set it is shown below). While this alleviates the problem, it is not an ideal solution.
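
For reference, here is a minimal sketch of where the workaround goes; I am assuming the drop-frame-interval key sits in the [source-attr-all] group shown above (illustrative snippet only, not my full config):

[source-attr-all]
enable=1
type=4
latency=150
# drop every other decoded frame as a workaround for the artifacts
drop-frame-interval=2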

Questions:

  1. Are there any limitations regarding the number of 1080p RTSP video streams (16 in my case) that can be processed simultaneously?
  2. Are there any recommendations or optimization parameters to enhance the output video quality?

I appreciate your time and assistance in addressing this matter. Looking forward to your guidance.

The hardware video encoder is the limitation. You may monitor the hardware performance with the command “nvidia-smi dmon” while running the app.

And the CPU may be another limitation; please also monitor the CPU load while running the app.
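
For example, roughly like this while the app is running (a sketch; the exact dmon columns depend on your driver version):

# GPU: per-second SM, memory, encoder (enc) and decoder (dec) utilization
nvidia-smi dmon -s u

# CPU: overall and per-core load (press 1 inside top for the per-core view)
top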

Thank you for your reply!
** nvidia-smi dmon **


** cpu **

It seems the CPU load is high. The output image you posted in this topic shows that packets are being lost while the RTSP streams are transferred. The high CPU load may be the reason.

How should I optimize this?
Thank you!

When I set the fps to 6, I encounter blurriness in the video frames, especially when detecting moving objects. How can I address this issue? Your assistance would be greatly appreciated. Thank you.
** output **


** cpu **

** gpu **

When I input 16 RTSP video streams at 24 FPS with no model applied, the output video still exhibits pixelation. I suspect this is unlikely to be caused by network bandwidth constraints.

** config **

[primary-gie]
enable=0
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
batch-size=16
...

** output **

Thank you

Please use a local display with sink type 2 (EglSink) first, to check whether the packet loss comes from the input RTSP sources.
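
A minimal sketch of such a sink group, assuming a local X11/EGL display is reachable from the container (replace the RTSPStreaming sink while testing):

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=nvdrmvideosink
type=2
sync=1
gpu-id=0
nvbuf-memory-type=0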

I changed the sink type to 2, and similar issues still occur, especially when there are moving objects.

So the packet loss happens with the RTSP source. Since you use the same RTSP server for all inputs, please check whether port 8554 of 192.168.99.151 can support such a large bandwidth.


Suppose each RTSP stream's bitrate is 5 Mbps; then 16 × 5 = 80 Mbps is the total bandwidth needed every second.

What is the packet loss rate on port 8554 of 192.168.99.151 when you run the 16-input-stream case?

And the bitrate can fluctuate from second to second; you need to guarantee that the maximum instantaneous bitrate also stays within the bandwidth limit.

How should I test the packet loss rate for this?
Thank you

I think iperf can do this. Please search Google for it: iPerf - iPerf3 and iPerf2 user documentation
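
For example, a rough sketch (192.168.99.151 is your RTSP server here, and 80M is just the aggregate bitrate estimated above):

# on the RTSP server (192.168.99.151)
iperf3 -s

# on the DeepStream host: push ~80 Mbps of UDP traffic for 30 s and report loss/jitter
iperf3 -c 192.168.99.151 -u -b 80M -t 30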

When I use 16 URL network video streams as input, there is no pixel blurring. Moreover, I have observed that certain RTSP streams experience frame-dropping issues.

http://stream4.iqilu.com/ksd/video/2020/02/17/97e3c56e283a10546f22204963086f65.mp4

For RTSP streams from a WAN, it is possible to lose packets during transfer if the payload is carried over UDP. Please check with the RTSP server.
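
If you want to force the RTP payload over TCP on the DeepStream side, the classic per-source groups of deepstream-app expose a select-rtp-protocol key; whether the [source-attr-all] group in your config accepts the same key is something you would need to verify, so treat this as a sketch:

[source0]
enable=1
#Type - 4=RTSP
type=4
uri=rtsp://192.168.99.151:8554/mystream9
#0=UDP + UDP multicast + TCP, 4=TCP only
select-rtp-protocol=4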

  1. I push the 16 RTSP streams over TCP and do not experience any packet loss.
  2. The network bandwidth has been verified to be sufficient.
  3. When using a higher-performance device with [primary-gie] enable=0, there are no pixelation issues when pushing 16 RTSP streams.
  4. I have observed that when the deepstream app first starts and the FPS is below 24, pixelation occurs in the first few seconds of the video.
  5. With [primary-gie] enable=1 and the FPS around 19, is it possible that this is causing the pixelation issue?

Thank you for patiently answering my questions

** primary-gie enable=1 fps **

When I set the RTSP push FPS to 19 or lower, the pixelation issue basically no longer occurs.

** output **

What is the RTSP streams' FPS? If enabling the PGIE causes the FPS to drop, maybe the PGIE (and its postprocessing) is the bottleneck. Please monitor the GPU and CPU load while running with the PGIE.

  1. When [primary-gie] enable=1 and the FPS is around 19, there is no pixelation.
  2. I am using a YOLOv5s model quantized to INT8 with CUDA post-processing. Is there any other way to accelerate the model's inference speed? (One option I am considering is sketched below.)
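
One knob I am considering (sketched here only; I have not confirmed its accuracy impact) is the interval setting of the [primary-gie] group, which skips inference on some batches and so trades detection frequency for pipeline throughput:

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
batch-size=16
# infer on every 2nd batch only (0 = infer on every batch)
interval=1
config-file=/opt/nvidia/deepstream/deepstream-6.3/samples/configs/DeepStream-Yolo/config_infer_primary_yoloV5lite.txt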

** [primary-gie] enable=1 and FPS around 19 **