DeepStream multi-URI pipeline - inference desynchronization

Hello,

I’m working on a project based on DeepStream, written in Python (partly reusing code from the deepstream_python_apps samples).
The project generally involves processing data from multiple sources.

So I get data from many sources (each prepared as a GstBin with a urisourcebin), then pass the data through streammux, nvinfer (object detection), etc. At the end I use a tiler and eglsink to get a nice output showing all the results. You can take a look at the pipeline below.

Initially I set the streammux properties like below:
batched-push-timeout: 40000
batch-size: 1

The pipeline was working fine, but the frame rate was rather slow (60 fps max in total - just 6 fps per camera) and I saw an ever-growing delay.

I was sure the model was the bottleneck, so I turned off inference - and noticed that the problem was actually with streammux.
To increase the frame rate I changed batch-size to 10 (the same as the number of sources), and the result was as expected: a better frame rate.
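In Python that change amounts to setting the muxer properties roughly like this (a minimal sketch; the element name, variable names, and the 1920x1080 output size are assumptions - use the values from your own setup):

```python
# Sketch: configuring nvstreammux for multiple sources (needs the GStreamer
# Python bindings; property names are from the nvstreammux documentation).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

NUM_SOURCES = 10  # same as the number of cameras

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("width", 1920)    # assumed muxer output resolution
streammux.set_property("height", 1080)
streammux.set_property("batch-size", NUM_SOURCES)      # was 1 before
streammux.set_property("batched-push-timeout", 40000)  # in microseconds
```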

But when I turned inference back on, I noticed that the inference results (the bboxes) were drawn in the wrong places.

Depending on batched-push-timeout and the number of sources, the bboxes jump between different outputs; sometimes all bboxes are drawn on a single output (in my case there should be one bbox on every output).
It looks like some kind of data desynchronization.

I have found a similar problem:
https://githubhot.com/repo/marcoslucianops/DeepStream-Yolo/issues/56

Could you help me with this problem?
BR!

Edit:

Hardware: RTX A2000 GPU
DeepStream: 6.0.1
TensorRT: 8.0.1-1
GPU Driver Version: 470.86
CUDA Version: 11.4

Hi,

This issue looks more related to DeepStream, so we are moving this post to the DeepStream forum to get better help.

Thank you.


Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

I have just added the hardware details to the first post.

Edit:
As I have now noticed, the problem is probably related to the sinks.
I added probe functions to different pads and, indeed, the results there are correct.
So there is just some problem with displaying the bboxes on the frames. Maybe a problem with nvdsosd? Or the tiler?
I tested both eglsink and multifilesink as outputs - same problem.
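For reference, the kind of probe I used looks roughly like this (a sketch assuming the pyds bindings from deepstream_python_apps; attach it to whatever pad you want to inspect):

```python
# Sketch of a buffer probe that walks NvDsBatchMeta and prints, per frame,
# the source index and each object's bbox (requires pyds from
# deepstream_python_apps and a running DeepStream pipeline).
import pyds
from gi.repository import Gst

def probe_cb(pad, info, user_data):
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj_meta.rect_params
            print(f"source {frame_meta.pad_index}: "
                  f"bbox ({r.left:.0f}, {r.top:.0f}, {r.width:.0f}x{r.height:.0f})")
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attach e.g. on the tiler's sink pad:
# tiler.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, probe_cb, None)
```

With this I could see that the metadata itself carried the right boxes per source, so the detection results upstream were fine.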

And here is a picture of the tiled output (just to be sure I have described the problem the right way):

As you can see above, all the boxes are drawn on one frame.

The pipeline picture is not clear enough to identify the components in the pipeline, and we don’t know your configuration either. Can you reproduce the problem with deepstream-app?

I have run the example and it works, of course.
And I have found the solution.
As you can see in the pipeline, I had linked the tiler plugin after the nvdsosd plugin.
The tiler should come before nvdsosd, which is how it is done in the deepstream example.
Sorry, I did not notice this.
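For anyone else hitting this, the difference is just the linking order (a sketch, assuming the elements were already created and added to the pipeline; variable names like `pgie` and `sink` are placeholders):

```python
# Wrong (what I had):  pgie -> nvosd -> tiler -> sink
# Right (as in the DeepStream samples): tiler before nvdsosd, so the boxes
# are drawn on the already-tiled frame coordinates.
streammux.link(pgie)      # nvinfer
pgie.link(tiler)          # nvmultistreamtiler
tiler.link(nvvidconv)     # nvvideoconvert (RGBA for the OSD)
nvvidconv.link(nvosd)     # nvdsosd
nvosd.link(sink)          # eglsink / multifilesink
```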

Glad to know you have resolved it.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.