Hi all,
I ran into an issue similar to this one: DeepStream pipeline blocks when queueing video buffers - #3 by Fiona.Chen
In summary, I need to buffer the last n frames, and when a specific object is detected in the current frame, write a small video file of n+m frames, where m is the number of frames after the event happened.
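The intended logic can be sketched in plain Python (the `FrameCache` class and its method names are hypothetical, just to illustrate the n+m window; the real implementation would operate on GstBuffers):

```python
from collections import deque

class FrameCache:
    """Keep the last n frames; after an event, collect m more and emit an n+m clip."""

    def __init__(self, n, m):
        self.n, self.m = n, m
        self.cache = deque(maxlen=n)  # oldest frames fall out automatically
        self.clip = None              # clip being assembled after an event
        self.remaining = 0            # post-event frames still to collect

    def push(self, frame, event=False):
        """Feed one frame; returns the finished n+m clip, or None."""
        self.cache.append(frame)
        if self.clip is not None:
            self.clip.append(frame)
            self.remaining -= 1
            if self.remaining == 0:
                clip, self.clip = self.clip, None
                return clip
        elif event:
            self.clip = list(self.cache)  # snapshot: last n frames, event frame included
            self.remaining = self.m
        return None

# With n=3 and m=2, an event at frame 4 yields the clip [2, 3, 4, 5, 6]:
fc = FrameCache(3, 2)
clips = [fc.push(i, event=(i == 4)) for i in range(10)]
clip = next(c for c in clips if c is not None)
```

This handles one event at a time; overlapping events would need extra bookkeeping.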
For the sake of example, let n := 100. I would like to use a queue to cache the last n frames: ... ! queue min-threshold-buffers=100 max-size-buffers=101 leaky=2 ! ...
This hangs: as mentioned in the other thread, the HW video decoder only has a buffer pool of 4 surfaces. I can increase that to a maximum of 4+24 with the num-extra-surfaces property of the decoder (as suggested in the other thread), but 28 is far from 100. So what is the correct solution here, i.e. how can I create a buffer of 100 frames?
So a frame (1920*1080 with 4 channels) takes about 8 MB. If I have to cache 100 of them, that is roughly 830 MB. AFAIK the AGX has 16 GB of memory, shared between CPU and GPU, so it should fit. Am I missing something?
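Double-checking the arithmetic (assuming 4 bytes per pixel, e.g. RGBA):

```python
width, height, bytes_per_pixel = 1920, 1080, 4   # RGBA assumed
frame_bytes = width * height * bytes_per_pixel   # 8,294,400 bytes per frame
cache_bytes = 100 * frame_bytes                  # memory for a 100-frame cache
print(frame_bytes / 1e6)  # ~8.3 MB per frame
print(cache_bytes / 1e9)  # ~0.83 GB, well under the AGX's 16 GB
```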
My current pipeline looks more or less like this (I removed the uninteresting elements such as queues, and the other branches of the tee): source bin - nvstreammux - nvinfer - tee - MyCacheElement - nvvideoconvert - capsfilter - nvv4l2h265encoder - h265parse - qtmux - filesink
The MyCacheElement is my buffer bin, which at the moment contains the earlier mentioned queue (queue min-threshold-buffers=100 max-size-buffers=101 leaky=2) and a valve element. Is there anything wrong with my approach?
Update: if I put MyCacheElement after the encoder, it works. However, in that case I run nvv4l2h265encoder on frames that I will drop later. I do not yet know how large that penalty is; I will measure it later.
Most DeepStream plugins accelerate processing with NVIDIA hardware. The hardware needs HW buffers to transfer data, so the buffers are allocated and managed by the GstBufferPool (gstreamer.freedesktop.org) mechanism inside the plugin.
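This fixed-size pool is exactly why the pipeline hangs: once downstream elements (here, the caching queue) hold every surface in the pool, the decoder blocks waiting for one to be returned. A minimal sketch of that mechanism in plain Python (not the actual GstBufferPool API):

```python
import queue

class BufferPool:
    """Fixed set of reusable buffers; acquire blocks while all are in flight."""

    def __init__(self, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(bytearray(16))  # stand-in for a HW surface

    def acquire(self, timeout=None):
        # Blocks (raises queue.Empty on timeout) until a buffer is released.
        return self._free.get(timeout=timeout)

    def release(self, buf):
        self._free.put(buf)

pool = BufferPool(4)                        # like a decoder with 4 surfaces
held = [pool.acquire() for _ in range(4)]   # a downstream cache holds all 4
try:
    pool.acquire(timeout=0.1)               # 5th request: the "decoder" stalls
    stalled = False
except queue.Empty:
    stalled = True
pool.release(held.pop())                    # returning one buffer unblocks it
buf = pool.acquire(timeout=0.1)
```

Caching 100 decoder output buffers therefore means holding 100 pool surfaces, which a 28-surface pool can never satisfy, regardless of how much system memory is free.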