How to calculate batched_push_timeout value?

• Hardware Platform (Jetson / GPU) NVIDIA A2
• DeepStream Version 6.3
• TensorRT Version 8.4.0
• NVIDIA GPU Driver Version (valid for GPU only) 535.129.03

I am using a DeepStream pipeline in Python and running inference with nvinferserver on RTSP streams at 60 fps.

Now, I am trying to calculate the batched-push-timeout property of nvstreammux. I could find more than one way to calculate it: some people say 1/max_fps and others say 1000000/max_fps; I think these are the same value in different units. The problem is that when I use these values, some frames get processed on time and others are delayed. For example, some cameras are processing frame #10 while two or three cameras are delayed, still processing frame #7.
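For reference, the two formulas quoted above give the same frame interval in different units, since the property is expressed in microseconds. A quick sketch of the calculation (`streammux` in the comment is a placeholder for your own nvstreammux element):

```python
# batched-push-timeout is expressed in microseconds.
# 1 / max_fps gives the frame interval in seconds;
# 1_000_000 / max_fps gives the same interval in microseconds.
max_fps = 60
timeout_us = 1_000_000 // max_fps
print(timeout_us)  # 16666 -> about one frame interval at 60 fps

# In the pipeline (sketch; `streammux` is your nvstreammux element):
# streammux.set_property("batched-push-timeout", timeout_us)
```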

This causes issues in our tracker service: we are doing multi-camera tracking, so object clustering doesn't work correctly.

When I remove that property, if a stream goes down or something similar happens, my whole pipeline hangs, presumably because it waits for a full batch. So how can I set it correctly?

Note: I am using 30 streams, in case that makes a difference in calculating the value.

Thank you.

Please refer to DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

I have read that topic many times, but the same behavior I mentioned still happens.

From your description, it seems your cameras are not aligned. Have you checked the timestamps of the frames? Are the timestamps aligned between the cameras?

Is there any variable in the frame metadata that holds its timestamp?

No. Please check the timestamp of the frames before nvstreammux.

Would you tell me how I can do that? Or is there a sample that does it?

I mean, how can I access the streams/frames before nvstreammux? Should I do it inside DeepStream?

There is a timestamp in GstBuffer. Please write code to read it.
DeepStream is based on GStreamer. GstBuffer (gstreamer.freedesktop.org)
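One way to read them in Python is sketched below: the per-buffer logic is a plain function, and the commented part shows how it could be attached as a pad probe upstream of nvstreammux (the attachment assumes gst-python is installed; `decoder` and `source_id` are illustrative names from your own pipeline setup):

```python
GST_SECOND = 1_000_000_000  # GstBuffer.pts is in nanoseconds

def log_pts(source_id, pts_ns):
    """Print a buffer's PTS in seconds, per source, and return it."""
    pts_s = pts_ns / GST_SECOND
    print(f"source {source_id}: pts = {pts_s:.6f} s")
    return pts_s

# Attaching it as a pad probe (sketch; assumes gst-python and your
# own element names):
#
#   def probe_cb(pad, info, source_id):
#       buf = info.get_buffer()
#       log_pts(source_id, buf.pts)        # buf.pts is in nanoseconds
#       return Gst.PadProbeReturn.OK
#
#   srcpad = decoder.get_static_pad("src")
#   srcpad.add_probe(Gst.PadProbeType.BUFFER, probe_cb, source_id)

log_pts(0, 16_666_667)  # one 60-fps frame interval, about 0.016667 s
```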

Okay, I am not sure whether it matters that I get the timestamps before or after nvstreammux, but I could get the ntp_timestamp in NvDsFrameMeta from the pgie probe function (same as deepstream_test_3). The batched_push_timeout is set to 20000 since my sources are 60 fps. Here's a snapshot of the output:

The differences are very small, but as you can see, each source is processing a different frame.
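To quantify how far apart the sources are, one could compute the spread of the per-source ntp_timestamp values within a batch. A sketch (the collection loop in the comment assumes pyds and a batch-meta probe as in deepstream_test_3):

```python
def batch_timestamp_spread(ntp_timestamps_ns):
    """Return the max difference (in ms) between the per-source
    ntp timestamps collected from one batch."""
    spread_ns = max(ntp_timestamps_ns) - min(ntp_timestamps_ns)
    return spread_ns / 1_000_000  # ns -> ms

# Collecting the values in a pgie probe (sketch; assumes pyds as in
# deepstream_test_3, where frame_meta is an NvDsFrameMeta):
#
#   stamps = []
#   l_frame = batch_meta.frame_meta_list
#   while l_frame is not None:
#       frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
#       stamps.append(frame_meta.ntp_timestamp)
#       l_frame = l_frame.next
#   print(f"spread: {batch_timestamp_spread(stamps):.3f} ms")

print(batch_timestamp_spread([1_000_000_000, 1_005_000_000, 1_016_000_000]))
```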

nvstreammux aligns the inputs by their timestamps, so it is correct that the batch contains frames with similar timestamps.

Okay, I get it!
As I understand it, if there are 32 sources, nvstreammux tries to form a batch of 32 frames. Is that correct?
If it is set to 20000, for example (since the fps is 60), I assume 20 ms is enough to form a batch as described. But as you can see in the screenshot, the batch contains different frames, which isn't logical; I would assume the whole batch should have the same frame ID.

Yes.

The batch tries to collect frames with close timestamps within the batched_push_timeout window.

Okay. So let's assume we have a frame with timestamp X; it should be batched with all frames whose timestamps fall in [X - batched_push_timeout, X], inclusive.

If that's correct, let me know how to check the timestamps before nvstreammux so I can reproduce it in my Python code.

You are using the gst-python bindings to write your Python pipeline, so it is just a normal GStreamer pipeline. As I mentioned: "There is a timestamp in GstBuffer. Please write code to read it."

Python GStreamer Tutorial (brettviren.github.io)
How to launch Gstreamer pipeline in Python - LifeStyleTransfer
GstBuffer (gstreamer.freedesktop.org) -- please read the "pts" field.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

There is a description of how nvstreammux constructs the batch: Gst-nvstreammux — DeepStream documentation 6.4 documentation

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.