Is there any way to use temporal batching from a single video file?
I have seen answers that contradict each other (see this and this).
I am trying to extract batches from a video source, run inference on them, and stream the output over RTSP. From my understanding, if my streammux config is as follows:
[streammux]
gpu-id=0
batch-size=2
batched-push-timeout=-1
width=1280
height=720
enable-padding=1
And my primary GIE:
[primary-gie]
enable=1
gpu-id=0
batch-size=2
gie-unique-id=1
interval=0
config-file=detector.txt
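For completeness, my understanding is that the nvinfer config file referenced above (detector.txt) carries its own batch-size under its [property] group, and that it should match the value set here. A sketch of the relevant fragment (only batch-size is the point; I am assuming the rest of detector.txt is unchanged):

```
[property]
# should match the batch-size set in [streammux] and [primary-gie]
batch-size=2
```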
this should be working. I am not sure whether RTSP streaming would still be possible in that case. My actual sink is:
[sink0]
enable=1
type=4
codec=1
sync=0
bitrate=4000000
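For reference, my understanding is that type=4 in deepstream-app is the RTSPStreaming sink, and the sample configs also set the RTSP and UDP ports explicitly. Something like the fragment below (the port values are what I recall from the sample configs, so treat them as assumptions, not values from my actual file):

```
[sink0]
enable=1
# type=4 -> RTSPStreaming
type=4
# codec=1 -> H.264
codec=1
sync=0
bitrate=4000000
rtsp-port=8554
udp-port=5400
```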
And it seems that when I increase the batch size, the output video is skipping some frames. So I am not sure whether my problem is only related to RTSP streaming, or whether batches are still of size 1 even with the configuration above.
• Hardware Platform (Jetson / GPU) Xavier NX
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (valid for GPU only)