The number of detected objects decreases as the number of video sources increases

Hi,
With my custom-trained YOLOv5m model, I was using the deepstream-imagedata-multistream sample. Every detected object was saved as an image to a local folder. After playing a single video, 5496 images were saved. I then ran the application a second time with the identical video coming from two separate video sources, expecting every detected object to be saved into the corresponding folder. Instead, the number of saved images dropped by about half, and it kept decreasing unevenly as I added more video sources.
What could be the cause of this? Can frames be skipped when GPU utilization increases?
Thank you,
Chaki

• Hardware Platform (Jetson / GPU): NVIDIA RTX 2070
• DeepStream Version: 6.1
• TensorRT Version: 8.2.5.1
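For reference, the per-object image saving in deepstream-imagedata-multistream is done from a pad-probe callback roughly like the sketch below. This is a minimal sketch, not the exact sample code: the probe name, folder layout, and file naming are assumptions, and it assumes the buffer has been converted to RGBA upstream as in the sample.

import os
import cv2
import numpy as np
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def save_objects_probe(pad, info, u_data):
    # Assumes the buffer upstream has been converted to RGBA
    # (nvvideoconvert + capsfilter), as in the sample.
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Hypothetical per-source folder layout
        out_dir = os.path.join("frames", "stream_%d" % frame_meta.pad_index)
        os.makedirs(out_dir, exist_ok=True)
        obj_idx = 0
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj_meta.rect_params
            x, y, w, h = int(r.left), int(r.top), int(r.width), int(r.height)
            # Copy the crop so it is contiguous before handing it to OpenCV
            crop = np.array(frame[y:y + h, x:x + w], copy=True, order="C")
            fname = "frame_%d_obj_%d.jpg" % (frame_meta.frame_num, obj_idx)
            cv2.imwrite(os.path.join(out_dir, fname),
                        cv2.cvtColor(crop, cv2.COLOR_RGBA2BGR))
            obj_idx += 1
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK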

Could you provide the nvinfer config file, plus the nvstreammux configuration info?

NVINFER config file

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=new_face_yolov5.cfg
model-file=new_face_yolov5.wts
model-engine-file=model_b6_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=4
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=4
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
pre-cluster-threshold=0

nvstreammux configuration:

streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
if not streammux:
    sys.stderr.write(" Unable to create NvStreamMux \n")
if is_live:
    print("At least one of the sources is live")
    streammux.set_property('live-source', 1)

streammux.set_property('width', 1920)
streammux.set_property('height', 1080)
streammux.set_property('batch-size', number_sources)
streammux.set_property('batched-push-timeout', 4000000)
pipeline.add(streammux)
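One way to narrow this down is to count the frames that actually leave nvstreammux per source and compare the totals with the input video's frame count: if the counts already fall short here, frames are being dropped before inference rather than detections being missed. A rough sketch follows; the probe and variable names are assumptions, and streammux is the element created above.

from collections import defaultdict
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

frame_counts = defaultdict(int)  # pad_index -> frames seen after nvstreammux

def streammux_src_probe(pad, info, u_data):
    # Count batched frames per source; if these totals fall short of the
    # input video's frame count, frames are dropped before inference.
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        frame_counts[frame_meta.pad_index] += 1
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# "streammux" is the element created in the snippet above.
streammux.get_static_pad("src").add_probe(
    Gst.PadProbeType.BUFFER, streammux_src_probe, 0)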

That seems strange. Could you provide reproducible code?

Could you provide simple code to reproduce this issue? Can you reproduce it using the native DeepStream sample code, configuration file, and model?

Currently I am running the DeepStream Python example deepstream_test_3.py, and GPU utilization is 96%.

I can't reproduce this issue locally; here is the log:
log.txt (6.0 KB)
Do you still have any DeepStream issue? Please provide simplified code to reproduce it.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.