Temporal batching from a single source

Is there any way of using temporal batching from a single video file?
I have seen answers that contradict each other (see this and this).

I am trying to extract batches from a single video source, run inference, and stream the output over RTSP. So, from my understanding, if my [streammux] config is as follows:

[streammux]
gpu-id=0
batch-size=2
batched-push-timeout=-1
width=1280
height=720
enable-padding=1

And my [primary-gie]:

[primary-gie]
enable=1
gpu-id=0
batch-size=2
gie-unique-id=1
interval=0
config-file=detector.txt

This should be working. However, I am not sure whether streaming would also be possible. My actual sink is:

[sink0]
enable=1
type=4
codec=1
sync=0
bitrate=4000000

And it seems that when I increase the batch size, the output video is skipping some frames. So I am not sure whether my problem is only related to RTSP streaming or whether the batches are still of size 1 even with my current configuration.
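
For reference, this is the fuller sink group that I believe type=4 (the RTSP streaming sink) expects; the two port values are assumptions on my side, since I have not set them explicitly:

[sink0]
enable=1
type=4
codec=1
sync=0
bitrate=4000000
rtsp-port=8554
udp-port=5400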

• Hardware Platform (Jetson / GPU) Xavier NX
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (valid for GPU only)

Hi,
The batch-size in [streammux] has to match the number of sources. Do you mean you would like to run inference at an interval? In that case, you should set interval in [primary-gie].
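
For example, a minimal sketch of an interval setting (the value 2 is illustrative; interval=2 skips two batches between inference calls, so roughly every third frame is inferred):

[primary-gie]
enable=1
gpu-id=0
batch-size=1
interval=2
gie-unique-id=1
config-file=detector.txt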

Hi, no, I would like to get frames in groups of batch_size from a single source. Is that possible?

Hi,
Not quite sure about the use case. Do you mean running something like tee:

$ gst-launch-1.0 videotestsrc num-buffers=100 ! nvvideoconvert ! tee name=t t. ! queue ! nvoverlaysink t. ! queue ! nvv4l2h264enc ! filesink location=a.h264

In this pipeline, the single source is sent to the overlay sink and the video encoder simultaneously. If this is not your use case, please share more details.
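
If the goal is instead to fill a batch of 2 from one file, one option you could try is the MultiURI source type in deepstream-app, which instantiates the same URI num-sources times so that the source count matches the [streammux] batch-size. A sketch, with an assumed file path:

[source0]
enable=1
type=3
uri=file:///path/to/video.mp4
num-sources=2

Note that this decodes the file twice and batches the same frame position from each copy; it does not put two consecutive frames of a single stream into one batch.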