Streams are added with delay in Batch Meta when running deepstream-imagedata-multistream

  • Hardware Platform (Jetson / GPU) = Jetson Orin Nx
  • DeepStream Version = 6.3.0
  • JetPack Version (valid for Jetson only) = 35.6.0
  • TensorRT Version = 8.5.2.2
  • CUDA Version (valid for GPU only) = 11.4
  • Issue Type( questions, new requirements, bugs) = Question

Hi, I am running the multistream sample code (deepstream-imagedata-multistream) provided with DeepStream 6.3. The pipeline for this code is:
uridecodebin → streammux → Pgie → nvvidconv1 → filter1 → tiler → nvvidconv → OSD → fakesink
Question: I am able to add 8 sources simultaneously, but if I try to add more than 8 streams, it takes 20-30 minutes for them to appear in the batch meta.
These are the logs from running the program with 11 streams: initially the batch meta contains only 1 stream, and it takes almost 15 minutes for another stream to be added.

Is there any way to run all the streams simultaneously, or at least reduce the time it takes to add the streams to the batch meta?
Thank you.

What kind of sources? Local files or live streams?

What are your nvstreammux and nvinfer configurations in the 11-source case?

I am using RTSP H.264-encoded streams as sources, and these are the nvstreammux and nvinfer configurations.

NvStreammux :

   streammux.set_property('live-source', 1)
   streammux.set_property('width', 1280)
   streammux.set_property('height', 720)
   streammux.set_property('batch-size', number_sources)
   streammux.set_property('batched-push-timeout', 4000000)
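One thing worth noting about the settings above: `batched-push-timeout` is in microseconds, so `4000000` makes the muxer wait up to 4 seconds for a full batch before pushing a partial one. A quick sanity check (plain Python, no DeepStream required) comparing that timeout against the per-frame interval of a 30 fps live source — the lower suggested value here is an assumption based on matching the timeout to one frame period:

```python
# batched-push-timeout is the time (in microseconds) nvstreammux waits
# before pushing a partially filled batch downstream.
def frame_interval_us(fps):
    """Microseconds between consecutive frames at a given frame rate."""
    return int(1_000_000 / fps)

current_timeout_us = 4_000_000                # value used above: 4 seconds
suggested_timeout_us = frame_interval_us(30)  # ~33 ms, one frame at 30 fps

# The configured timeout is two orders of magnitude longer than one
# frame interval, so a partially filled batch can stall for seconds.
print(current_timeout_us // suggested_timeout_us)  # → 120
```

For live RTSP sources, setting the timeout near one frame interval (e.g. `33000` for 30 fps) is a common tuning starting point rather than a guaranteed fix.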

NvInfer :

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=11
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

# cluster-mode: 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
cluster-mode=1

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.7
minBoxes=1

#Use the config params below for dbscan clustering mode
#[class-attrs-all]
#detected-min-w=4
#detected-min-h=4
#minBoxes=3

# Per-class configurations

[class-attrs-0]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.95

[class-attrs-1]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.5

[class-attrs-2]
pre-cluster-threshold=0.1
eps=0.6
dbscan-min-score=0.95

[class-attrs-3]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.5

Please refer to the DeepStream SDK FAQ in the Intelligent Video Analytics / DeepStream SDK category on the NVIDIA Developer Forums.

Hey @Fiona.Chen, thanks for replying. I tried changing `batched-push-timeout` and NVSTREAMMUX_ADAPTIVE_BATCHING, which increased the FPS of the streams, but my issue remains the same: all 11 streams still take 30-45 minutes to be added to the pipeline. Is there any other configuration I can tweak?
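For reference, a minimal sketch of how these settings are usually applied: NVSTREAMMUX_ADAPTIVE_BATCHING belongs to the new nvstreammux, which is opted into via an environment variable before launching the app. The env-var names below are taken from NVIDIA's documentation for DeepStream 6.x, but verify them against your exact version:

```shell
# Opt into the new nvstreammux and enable adaptive batching
# (assumed behavior for DS 6.x; check the nvstreammux docs for your release).
export USE_NEW_NVSTREAMMUX=yes
export NVSTREAMMUX_ADAPTIVE_BATCHING=yes
echo "USE_NEW_NVSTREAMMUX=$USE_NEW_NVSTREAMMUX"
echo "NVSTREAMMUX_ADAPTIVE_BATCHING=$NVSTREAMMUX_ADAPTIVE_BATCHING"
# then launch the app in the same shell, e.g.:
# python3 deepstream_imagedata-multistream.py <rtsp-uris...> frames
```

Note that the new nvstreammux ignores `width`/`height`/`live-source` properties, so the property-setting code would also need adjusting if you switch.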

We don’t know how you added the streams.

I am using Raspberry Pi Global Shutter cameras as sources, and I am adding the RTSP links of those cameras as arguments using this command:
python3 deepstream_imagedata-multistream.py rtsp://10.99.17.158:8554/test rtsp://10.99.17.166:8554/test rtsp://10.99.17.173:8554/test rtsp://10.99.17.179:8554/test rtsp://10.99.17.137:8554/test rtsp://10.99.17.153:8554/test rtsp://10.99.17.213:8554/test rtsp://10.99.17.225:8554/test rtsp://10.99.17.235:8554/test rtsp://10.99.17.191:8554/test rtsp://10.99.17.195:8554/test frames
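To make the relationship between this command line and the streammux settings concrete: in the sample, every argument except the last (the output folder, `frames` here) is treated as a stream URI, and the URI count becomes `number_sources`, which drives the streammux `batch-size`. A small sketch of that argv handling (the `parse_args` helper name is mine, not the sample's):

```python
# Sketch of the sample's argv convention: all args but the last are stream
# URIs; the last is the folder where frames are saved.
def parse_args(argv):
    """Return (uris, output_dir); len(uris) becomes streammux batch-size."""
    uris, output_dir = argv[1:-1], argv[-1]
    return uris, output_dir

uris, out_dir = parse_args([
    "deepstream_imagedata-multistream.py",
    "rtsp://10.99.17.158:8554/test",
    "rtsp://10.99.17.166:8554/test",
    "frames",
])
print(len(uris), out_dir)  # → 2 frames
```

With 11 URIs this yields `batch-size=11` on the muxer, while the pasted nvinfer engine file name contains `_b1_`, so the two batch sizes may not match at startup.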

I am using the deepstream-imagedata-multistream sample from deepstream_python_apps as a reference.
Here are the source code and config file:
deepstream-imagedata-multistream.zip (56.9 KB)