Pixel distortions when the pipeline's FPS falls below the frame rate of the RTSP sources

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.3 (docker image: nvcr.io/nvidia/deepstream:6.3-triton-multiarch)
• NVIDIA GPU Driver Version (valid for GPU only) 525.147.05
• Issue Type( questions, new requirements, bugs) Questions/Bugs

Hi,

I am using the DeepStream reference application (apps/sample_apps/deepstream-app) with the new Gst-nvstreammux enabled (export USE_NEW_NVSTREAMMUX=yes). The inputs are RTSP sources.

I have observed pixel distortions on the sink (type: 2) whenever the pipeline’s performance measurement (FPS) falls below the frame rate of the RTSP sources, which is 25 fps.

To reproduce the issue, I configured the application with 20 sources: I have 2 IP cameras, and I duplicated each stream 10 times in the sources_rtsp.csv file. A video example of the pixel distortions can be found here:
pixel_distortions_example.zip (17.9 MB)

However, when I reduce the number of sources to 10, the pipeline sustains a performance measurement of 25 FPS and the pixel distortions disappear.

Has anyone encountered this issue before?
Your assistance would be greatly appreciated.
Thank you.

The deepstream-app configuration is:

application:
  enable-perf-measurement: 1
  perf-measurement-interval-sec: 5

tiled-display:
  enable: 1
  rows: 4
  columns: 5 
  width: 1920
  height: 1080
  gpu-id: 0
  nvbuf-memory-type: 0

source:
  csv-file-path: sources_rtsp.csv

sink0:
  enable: 1
  type: 2
  sync: 0
  source-id: 0
  gpu-id: 0
  nvbuf-memory-type: 0

osd:
  enable: 1
  gpu-id: 0
  border-width: 1
  text-size: 15
  text-color: 1;1;1;1
  text-bg-color: 0.3;0.3;0.3;1
  font: Serif
  show-clock: 0
  clock-x-offset: 800
  clock-y-offset: 820
  clock-text-size: 12
  clock-color: 1;0;0;0
  nvbuf-memory-type: 0


streammux:
  attach-sys-ts: 1
  config-file: config_mux.txt

primary-gie:
  enable: 1
  gpu-id: 0
  model-engine-file: ../../models/Primary_Detector/resnet10.caffemodel_b20_gpu0_int8.engine
  #Required to display the PGIE labels, should be added even when using config-file
  #property
  batch-size: 20
  #Required by the app for OSD, not a plugin property
  bbox-border-color0: 1;0;0;1
  bbox-border-color1: 0;1;1;1
  bbox-border-color2: 0;0;1;1
  bbox-border-color3: 0;1;0;1
  interval: 0
  #Required by the app for SGIE, when used along with config-file property
  gie-unique-id: 1
  nvbuf-memory-type: 0
  config-file: config_infer_primary.yml

where sources_rtsp.csv contains:

enable,type,uri,num-sources,gpu-id,cudadec-memtype
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.123:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
1,4,rtsp://10.10.11.122:554,1,0,0
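Since the CSV only repeats two camera URIs, it can be generated rather than maintained by hand. A minimal sketch, assuming the same two camera IPs from this thread and ten copies of each:

```shell
# Generate sources_rtsp.csv: 2 cameras x 10 duplicates = 20 sources.
# The IPs and the column layout match the file shown above.
printf 'enable,type,uri,num-sources,gpu-id,cudadec-memtype\n' > sources_rtsp.csv
for cam in 10.10.11.123 10.10.11.122; do
  for _ in $(seq 1 10); do
    printf '1,4,rtsp://%s:554,1,0,0\n' "$cam" >> sources_rtsp.csv
  done
done
```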

and config_mux.txt contains:

[property]
adaptive-batching=1
frame-duration=-1
batch-size=20
## Set to maximum fps
overall-min-fps-n=25
overall-min-fps-d=1
## Set to ceil(maximum fps/minimum fps)
max-same-source-frames=1

[source-config-0]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-1]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-2]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-3]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-4]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-5]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-6]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-7]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-8]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-9]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-10]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-11]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-12]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-13]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-14]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-15]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-16]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-17]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-18]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-19]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1
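The twenty identical [source-config-N] sections can likewise be generated instead of copied by hand. A sketch that reproduces the file above, assuming all sources run at the same 25 fps (so ceil(current fps / minimum fps) = 1 for every source):

```shell
# Generate config_mux.txt for 20 sources that all run at 25 fps.
{
  printf '[property]\n'
  printf 'adaptive-batching=1\n'
  printf 'frame-duration=-1\n'
  printf 'batch-size=20\n'
  # Set to maximum fps
  printf 'overall-min-fps-n=25\n'
  printf 'overall-min-fps-d=1\n'
  # Set to ceil(maximum fps / minimum fps)
  printf 'max-same-source-frames=1\n'
  # One section per source; max-num-frames-per-batch = ceil(current fps / minimum fps)
  for n in $(seq 0 19); do
    printf '\n[source-config-%d]\nmax-num-frames-per-batch=1\n' "$n"
  done
} > config_mux.txt
```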

It seems you are using IP cameras. Does the camera’s bandwidth support serving 10 streams on the same port?

Have you followed the instructions in the DeepStream SDK FAQ (Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums) to set the nvstreammux parameters?

What is your GPU? Can you measure the GPU utilization with the “nvidia-smi dmon” command while running the 20-stream case?

I conducted a test and found that the camera can support up to 10 streams on the same port. Indeed, when I utilize 10 sources from the same camera, I do not observe any pixel distortions.

Further tests were conducted with 20 sources: from 20 different cameras, from 4 different cameras (each camera stream duplicated 5 times), and from 2 different cameras (each camera stream duplicated 10 times). In all these scenarios involving 20 sources, pixel distortions are observed.

I followed the instructions to configure the nvstreammux parameters, as detailed in the config_mux.txt file mentioned in the initial post of this thread.

My GPU is a GeForce RTX 3050 Ti Laptop GPU with 4 GB of memory.

The performance observed with 10 streams and with 20 streams, measured with “nvidia-smi dmon” and “nvidia-smi”, is shown in the attached screenshots.

The hardware decoder is overloaded with 20 streams. You may try a more capable GPU.

What are the format (H.264, H.265, etc.), frame rate, and resolution of your RTSP cameras?

The RTSP camera streams have the following characteristics:

  • Format: H.264
  • Resolution: 1920x1080
  • Frame rate: 25 FPS
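Given those stream characteristics, a rough back-of-the-envelope check of the required decode throughput is possible (only an estimate; it says nothing about what a particular NVDEC can actually sustain):

```shell
# Required NVDEC throughput = streams x fps, in decoded 1080p H.264 frames per second.
streams=20
fps=25
required=$((streams * fps))
echo "${required} frames/s of 1080p H.264 decode required"  # prints "500 frames/s ..."
```

If the GPU's decoder cannot sustain this aggregate rate, frames arrive late or incomplete, which is consistent with the distortions appearing only at 20 streams.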

Additionally, I have another laptop equipped with an RTX 4070 8 GB GPU, which performs better than the RTX 3050: it can decode a greater number of streams without any pixel distortions.

I have also experimented with the H.265 format, and with this format, I am able to handle more streams effectively.

How can I determine, from a GPU model’s technical specifications, the maximum number of streams its decoder can handle?

Thanks

There is no published decoding performance data for GeForce laptop GPUs. You may need to measure the maximum frame rate yourself.

Theoretically, H.265 decoding is faster than H.264 decoding.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.