The branch configuration in deepstream_parallel_inference_app caused the RTSP streams to stop

• Hardware Platform (GPU): RTX 3060
• DeepStream Version: 7.0
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): CUDA 12.2
• Issue Type: questions

I found that when I run deepstream_parallel_inference with 3 RTSP streams, if the src-ids for the branches are not set to 0;1;2, for example:

branch0:
  ## pgie's id
  pgie-id: 999
  ## select sources by sourceid
  src-ids: 1;2

branch1:
  ## pgie's id
  pgie-id: 888
  ## select sources by sourceid
  src-ids: 0;1

the RTSP streams stop completely after running for a while.

It must be set as follows:

branch0:
  ## pgie's id
  pgie-id: 999
  ## select sources by sourceid
  src-ids: 0;1;2

branch1:
  ## pgie's id
  pgie-id: 888
  ## select sources by sourceid
  src-ids: 0;1;2

for the streams to keep running continuously.

My test results show that the src-ids of every branch must include all the sources for the streams to continue running.
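To make the observation concrete, here is a small, hypothetical Python check (not part of deepstream_parallel_inference_app) that flags branch configurations like the failing one above, under the assumption drawn from my tests that each branch's src-ids must list every source:

```python
# Hypothetical sanity check, not part of the DeepStream app:
# per the test results above, every branch's src-ids must list every source id.
def branches_cover_all_sources(num_sources, branches):
    """branches maps a branch name to its list of src-ids."""
    required = set(range(num_sources))
    return all(required <= set(src_ids) for src_ids in branches.values())

# The configuration that stalled: neither branch lists all three sources.
failing = {"branch0": [1, 2], "branch1": [0, 1]}
# The configuration that kept running: both branches list 0;1;2.
working = {"branch0": [0, 1, 2], "branch1": [0, 1, 2]}

print(branches_cover_all_sources(3, failing))  # False
print(branches_cover_all_sources(3, working))  # True
```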

What is the metamux configuration for your case?

This is my configuration file:
source4_1080p_dec_parallel_infer.txt (8.1 KB)

streammux:
  gpu-id: 0
  ## Boolean property to inform muxer that sources are live
  live-source: 1
  buffer-pool-size: 3
  batch-size: 3
  ## timeout in usec to wait after the first buffer is available
  ## before pushing the batch even if the complete batch is not formed
  batched-push-timeout: 1000
  ## Set muxer output width and height
  width: 1920
  height: 1080
  ## Enable to maintain aspect ratio w.r.t. source and allow black borders;
  ## works along with the width and height properties
  enable-padding: 0
  nvbuf-memory-type: 0
  drop-pipeline-eos: 1
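One thing worth double-checking in the streammux group above (an observation, not a confirmed fix for the stall): with live sources, batched-push-timeout is commonly set near the frame interval so a full batch of 3 has time to form, and 1000 usec is well below the ~33 ms frame interval of a 30 fps stream. A quick calculation, assuming 30 fps sources:

```python
# Hypothetical helper: frame interval in microseconds for a given fps,
# often used as a starting point for streammux batched-push-timeout.
def frame_interval_usec(fps: float) -> int:
    return round(1_000_000 / fps)

print(frame_interval_usec(30))  # 33333 usec for 30 fps sources
print(frame_interval_usec(25))  # 40000 usec for 25 fps sources
```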