Nvstreammux and nvinfer initialization

I have run into issues running the nvstreammux and nvinfer GStreamer plugins with many streams.
I noticed that if the first batch returned by streammux was not full, the pipeline would hang.
My workaround was ultimately to increase MUXER_BATCH_TIMEOUT_USEC to make sure that the first batch would be full.
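Concretely, the workaround looks roughly like this (a sketch; I assume the MUXER_BATCH_TIMEOUT_USEC constant is fed into nvstreammux's batched-push-timeout property, as in the DeepStream sample apps, and the value shown is just an example, not a recommended default):

```python
# Sketch of the workaround; the timeout value is only an example.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Raised well above one frame period so the first batch has time to fill up.
MUXER_BATCH_TIMEOUT_USEC = 200000

streammux = Gst.ElementFactory.make("nvstreammux", "muxer")
streammux.set_property("batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC)
```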
Are you aware of such issues?

Which platform are you using? Could you share your environment and details on how to reproduce the issue?

I am on a Xavier, using RTSP streams.
My pipeline looks like this:

rtspsrc → rtph264depay → h264parse → nvv4l2decoder \
                                                     → nvstreammux → nvinfer → fakesink
rtspsrc → rtph264depay → h264parse → nvv4l2decoder /
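In code, it is roughly equivalent to the following (a sketch built with Gst.parse_launch; the RTSP URLs, the nvinfer config path, and the muxer property values are placeholders, not my real settings):

```python
# Sketch of the two-stream pipeline; URLs, config path and property values
# are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

PIPELINE_DESC = (
    "nvstreammux name=mux batch-size=2 width=1920 height=1080 "
    "live-source=1 batched-push-timeout=40000 ! "
    "nvinfer config-file-path=config_infer_primary.txt ! fakesink "
    "rtspsrc location=rtsp://camera-1/stream ! rtph264depay ! h264parse ! "
    "nvv4l2decoder ! mux.sink_0 "
    "rtspsrc location=rtsp://camera-2/stream ! rtph264depay ! h264parse ! "
    "nvv4l2decoder ! mux.sink_1"
)

pipeline = Gst.parse_launch(PIPELINE_DESC)
pipeline.set_state(Gst.State.PLAYING)

# Keep the pipeline running.
loop = GLib.MainLoop()
loop.run()
```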

I ran my pipeline with GST_DEBUG=nvstreammux:7 and realized that streammux not pushing a full first batch was 100% correlated with the pipeline stopping after the first frame was inferred. The exact same configuration would sometimes run and sometimes stop after the first iteration, depending on whether the first batch pushed by the muxer was full or not.
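For reference, here is roughly how I enable that log level when launching from Python (a sketch; setting GST_DEBUG in the environment before Gst.init() is equivalent to exporting it in the shell):

```python
# Sketch: enable TRACE-level (7) logging for the nvstreammux category.
import os
os.environ["GST_DEBUG"] = "nvstreammux:7"   # must be set before Gst.init()

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Alternatively, after init:
# Gst.debug_set_threshold_from_string("nvstreammux:7", True)
```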

If I increase the muxer timeout, the first batch is full and the pipeline consistently works correctly.

Since it is an RTSP source, it is possible that the batch is not full, so you should specify batched-push-timeout based on the maximum frame rate. In addition, you need to set live-source=1.
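For example, something along these lines (a sketch; the property names assume the classic nvstreammux plugin, and max_fps is whatever your cameras actually deliver):

```python
# Sketch: derive batched-push-timeout (in microseconds) from the maximum
# source frame rate, and mark the sources as live.
def configure_muxer_for_live_rtsp(streammux, max_fps: int) -> None:
    frame_period_usec = 1_000_000 // max_fps   # 25 fps -> 40000 us
    streammux.set_property("live-source", 1)
    streammux.set_property("batched-push-timeout", frame_period_usec)

# e.g. configure_muxer_for_live_rtsp(streammux, max_fps=25)
```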

Hi,
I am indeed using the live-source parameter set to 1 (True). I would like to set batched-push-timeout to 40 ms (because my streams run at 25 fps), but if I do so, I observe that the first batch pushed by the muxer is not full and the pipeline then gets stuck.

If I increase batched-push-timeout, the first batch is full and the pipeline proceeds without problems.
Is it possible that some memory allocation happens inside the muxer when the first batch is received, and that this prevents bigger batches from being returned later on, or something along those lines?

Did you manage to reproduce my issue?

Yes, our internal team is trying to reproduce it.

Let’s close this topic, since we tracked the issue in a bug report and that bug has already been closed.

Please create a new topic or file a new bug if you still need further help, as per the discussion on that bug.