Hello,
I’m working on a project based on DeepStream, written in Python (partly reusing code from the deepstream_python_apps samples).
The project involves processing data from multiple sources.
So I take data from many inputs (each prepared as a GstBin with a urisourcebin), pass it through streammux, nvinfer (object detection), etc., and finally use a tiler and eglsink to get a nice combined view of all the results. You can take a look at the pipeline below.
Initially I have set streammux props like below:
batched-push-timeout: 40000
batch-size: 1
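Just to show why this setting matters, here is a back-of-the-envelope sketch (plain Python, not DeepStream API; the 25 fps per-source rate is only an example number) of how many batches streammux has to push downstream for a given batch-size:

```python
def mux_batches_per_sec(num_sources: int, fps_per_source: float, batch_size: int) -> float:
    """Rough batch rate streammux must push so every source keeps its fps.

    Simplifying assumption: the mux fills batches evenly from all sources;
    real behaviour also depends on batched-push-timeout and arrival jitter.
    """
    total_frames_per_sec = num_sources * fps_per_source
    return total_frames_per_sec / batch_size

# batch-size=1: every frame becomes its own batch, so downstream elements
# see one batch per frame across all sources
print(mux_batches_per_sec(10, 25, 1))   # 250.0 batches/s

# batch-size=10 (one slot per source): one batch carries one frame
# from each source
print(mux_batches_per_sec(10, 25, 10))  # 25.0 batches/s
```

So with batch-size=1 the per-batch overhead downstream is paid ten times more often, which matches the low total frame rate I was seeing.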
The pipeline was working, but the frame rate was rather low (60 fps total, i.e. only 6 fps per camera) and I saw a steadily growing delay.
I was sure the model was the bottleneck, so I turned off the inference, and it turned out the problem is in streammux.
To increase the frame rate I changed batch-size to 10 (the same as the number of sources), and as expected the frame rate improved.
But when I turned the inference back on, I noticed that the inference results (bboxes) were drawn in the wrong places.
Depending on batched-push-timeout and the number of sources, the bboxes jump between different outputs; sometimes all bboxes are drawn on a single output (in my case there should be exactly one bbox on every output).
It looks like some kind of data desynchronization.
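To illustrate what I mean by desynchronization, here is a toy sketch (plain Python, not the tiler's actual implementation; `tile_of` is a hypothetical helper) of how the tile position follows the frame's stream index in a row-major grid. If a detection ends up attached to the wrong frame in the batch, it gets rendered on another source's tile:

```python
def tile_of(stream_index: int, columns: int) -> tuple:
    """Return the (row, col) grid cell for a frame with this stream index,
    assuming a simple row-major tiled layout."""
    return (stream_index // columns, stream_index % columns)

# With a grid of 4 columns: stream 0 lands top-left, stream 9 on row 2, col 1.
# A bbox mistakenly associated with stream 9 instead of stream 0 would
# therefore jump from the top-left tile to a completely different tile.
print(tile_of(0, 4))  # (0, 0)
print(tile_of(9, 4))  # (2, 1)
```

That is exactly the jumping behaviour I observe between outputs.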
I have found a similar problem reported here:
https://githubhot.com/repo/marcoslucianops/DeepStream-Yolo/issues/56
Could you help me with this problem?
BR!
Edit:
Hardware: RTX A2000 GPU
DeepStream: 6.0.1
TensorRT: 8.0.1-1
GPU driver: 470.86
CUDA: 11.4