Parallel Inference App with Multiple RTSP Sources at Different FPS

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0 (docker image: nvcr.io/nvidia/deepstream:7.0-triton-multiarch)
• NVIDIA GPU Driver Version (valid for GPU only): 535.171.04

Hi,

I am currently working with the deepstream_parallel_inference_app and trying to process multiple RTSP sources that have different frame rates (FPS).

I’ve tested these two configurations:

  1. bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml
  2. vehicle_lpr_analytic/source4_1080p_dec_parallel_infer.yml

In both cases, I enabled only sink0 (EglSink) and modified the streammux group to point to a new streammux config file:

streammux:
  batch-size: 4
  ## Set muxer output width and height
  width: 1920
  height: 1080
  config-file: config_new_streammux.txt

For the sources, I used the following source.csv file:

enable,type,uri,num-sources,gpu-id,cudadec-memtype
1,4,rtsp://localhost:8554/stream_0,1,0,0
1,4,rtsp://localhost:8554/stream_1,1,0,0
1,4,rtsp://localhost:8554/stream_2,1,0,0
1,4,rtsp://localhost:8554/stream_3,1,0,0
  • stream_0 and stream_1 are running at 30 FPS
  • stream_2 and stream_3 are running at 20 FPS

Here’s the content of my config_new_streammux.txt file:

[property]
adaptive-batching=1
## Set to maximum fps
overall-min-fps-n=30
overall-min-fps-d=1
## Set to ceil(maximum fps/minimum fps)
max-same-source-frames=2

[source-config-0]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=2

[source-config-1]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=2

[source-config-2]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1

[source-config-3]
## Set to ceil(current fps/minimum fps)
max-num-frames-per-batch=1
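
For reference, the values above come from the FAQ formulas, with a maximum of 30 FPS and a minimum of 20 FPS across my sources:

  • overall-min-fps-n/d = 30/1 (the maximum FPS)
  • max-same-source-frames = ceil(30/20) = 2
  • max-num-frames-per-batch = ceil(30/20) = 2 for the 30 FPS sources (source-config-0/1) and ceil(20/20) = 1 for the 20 FPS sources (source-config-2/3)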

The problem I’m facing is that, after processing a few frames, the pipeline freezes: nothing gets processed, and GPU usage drops to 0%.

Questions:

  1. Is there a known issue with the deepstream_parallel_inference_app when handling multiple RTSP sources with different frame rates?
  2. Could the configuration for streammux be causing the freeze?
  3. Any suggestions or potential fixes for handling this scenario?

Thank you for your help!

We have a FAQ that shows how to tune the new nvstreammux. You can refer to it to tune your pipeline first.

I’ve already followed the instructions for tuning the new streammux, as you can see from the config_new_streammux.txt file I shared earlier.

However, even with this configuration, I’m still encountering the same issue: after a few frames, the pipeline freezes.

Upon reviewing the source code for the parallel_inference_app, I noticed that the same streammux configuration is applied not only to the first streammux but also to all the other streammuxes used within the parallel_infer_bin (where each branch has its own streammux).
Question: Is that correct? Or should each streammux have its own specific configuration?

New observation:
I removed the metamux element along with all elements after it, and added a fakesink at the end of each branch. I kept the same streammux configuration I shared earlier for all the streammuxes. In this case, the issue disappears and the pipeline runs without freezing.
This leads me to suspect that the issue might be related to the metamux rather than the streammux configuration.

Questions:

  1. Can the metamux handle multiple RTSP sources with different FPS?
  2. Is it possible that the metamux is causing the pipeline to freeze?

Thanks.

Currently, we use the same config file for all the nvstreammux. Because this part is open source, you can customize it to your own needs.
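
If you do customize it, the idea would be to point each branch’s nvstreammux at its own config file. A minimal sketch of what such a per-branch file could look like (the file name, the assumption that this branch only receives the 20 FPS streams, and the values are all hypothetical):

config_streammux_branch1.txt (hypothetical):

[property]
adaptive-batching=1
## if this branch only muxes the 20 FPS sources, no frame repetition is needed
overall-min-fps-n=20
overall-min-fps-d=1
max-same-source-frames=1

The code that attaches the config file to each branch’s streammux is part of the open-source app, so that is where you would select a different file per branch.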

Yes, the nvdsmetamux can handle multiple RTSP sources with different FPS.

Yes. You can try to tune the pts-tolerance parameter of the nvdsmetamux.
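
The pts-tolerance is set in the metamux config file that the yml’s metamux group points to. A minimal sketch of that file (the value and its unit below are illustrative assumptions; check the sample config shipped with the app for the actual defaults):

[property]
enable=1
## sink pad whose buffers are passed through to the src pad
active-pad=sink_0
## allowed PTS difference when matching metadata from the different branches
## (illustrative value; see the sample config for the default and unit)
pts-tolerance=100000

As I understand it, a larger tolerance lets the metamux match metadata whose PTS differs more across branches, which is typically what is needed when the branches run at different rates.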