Question regarding "batched-push-timeout" in nvstreammux

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
I have a question regarding the "batched-push-timeout" property in nvstreammux after reading the post "NVSTREAMMUX batched-push-timeout value/calculation".

It looks like "batched-push-timeout" should be set according to the fps of the source.
Our UDP sources stream 30 fps video, but nvinfer runs inference on only half of the frames ("interval=1"). Our fastest frame-processing time is around 4 ms (a frame without inference) and the slowest is around 110 ms (a frame with inference).

Our current "batched-push-timeout" is set to 25000 microseconds, and I see a lot of frames missing from batches. But given that half of the frames finish within 10 ms thanks to "interval=1", do I need to increase the 25000 value or not?

Thanks in advance

Please set batched-push-timeout to 1/max_fps. Please refer to Troubleshooting — DeepStream 6.1.1 Release documentation.
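For concreteness, the 1/max_fps rule just converts the fastest source frame rate into a per-frame duration in microseconds. A minimal sketch (the 30 fps figure comes from the question above):

```python
def batched_push_timeout_us(max_fps: float) -> int:
    """Recommended batched-push-timeout: the duration of one frame
    at the fastest source frame rate, in microseconds."""
    return int(1_000_000 / max_fps)

# 30 fps sources -> one frame roughly every 33333 microseconds
print(batched_push_timeout_us(30))
```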

Thanks. In my case, with interval=1 set on nvinfer (i.e. only half of the frames pass through the inference engine), does that affect the value of batched-push-timeout=1/max_fps? Should I treat max_fps as 15 fps and set batched-push-timeout to 66666 microseconds?

No. batched-push-timeout controls how long nvstreammux waits while assembling a batch; interval controls how often nvinfer runs inference. The two are independent.
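In other words, interval=N only thins out which frames nvinfer processes; frames still arrive at the muxer at the full source rate, so the batching timeout should track the source fps, not the inference fps. A small sketch of that arithmetic (assuming the 30 fps / interval=1 numbers from this thread):

```python
def effective_inference_fps(source_fps: float, interval: int) -> float:
    """nvinfer with interval=N skips N frames between inferences,
    so it infers on 1 of every (N + 1) frames."""
    return source_fps / (interval + 1)

source_fps = 30
# Inference runs at 15 fps with interval=1 ...
inference_fps = effective_inference_fps(source_fps, 1)
# ... but frames still reach nvstreammux at 30 fps, so the batching
# timeout is derived from the source rate, not the inference rate:
timeout_us = int(1_000_000 / source_fps)

print(inference_fps, timeout_us)
```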

Why do you want to do this?

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.