Concept of GStreamer elements for decoding a stream

Hi all,
I want to understand how GStreamer decodes a stream. Please correct me if I am wrong anywhere.
Suppose we use GStreamer + OpenCV in Python to decode an RTSP stream like this:

gstream_elements = (
    'rtspsrc location=rtsp latency=300 ! '
    'rtph264depay ! h264parse ! '
    'queue max-size-buffers=100 leaky=2 ! '
    'omxh264dec enable-max-performance=1 enable-low-outbuffer=1 ! '
    'video/x-raw(memory:NVMM), format=(string)NV12 ! '
    'nvvidconv ! video/x-raw, width=450, height=450, format=(string)BGRx ! '
    'videorate ! video/x-raw, framerate=(fraction)10/1 ! '
    'videoconvert ! '
    'appsink'
)
cap = cv2.VideoCapture(gstream_elements, cv2.CAP_GSTREAMER)

1- Is the structure of the elements like this?
source > decoder > transforms > sink
Is there a queue between each pair of elements?
Does the source push its data directly into the decoder, or into the decoder's queue?
What is the difference between source > queue > decoder > … and source > decoder > … ?
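For illustration, the variant with an explicit queue between source and decoder could be written like this (a sketch based on the pipeline above, with the transform elements trimmed for brevity). A queue element adds a thread boundary: upstream elements push buffers into the queue, and a separate streaming thread pops them for the decoder, so the two sides are decoupled and buffered.

```python
# Sketch: same pipeline as above, but with an explicit queue before the
# decoder. The queue creates a thread boundary and buffers data, so the
# depay/parse side and the decoder side run in separate streaming threads.
pipeline_with_queue = (
    'rtspsrc location=rtsp latency=300 ! '
    'rtph264depay ! h264parse ! '
    'queue ! '
    'omxh264dec ! '
    'nvvidconv ! video/x-raw, format=(string)BGRx ! '
    'videoconvert ! appsink'
)
```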

2- If the input frame rate is 25 fps and I want to capture only every 5th input frame, what should I do?

One solution is to throttle the loop with the time.sleep(1/5) function, like this:

while True:
    ret, frame = cap.read()
    time.sleep(1 / 5)
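A different sketch (not from the original thread): instead of sleeping, call cap.grab() on every frame so OpenCV keeps draining the pipeline, and only call cap.retrieve() on every 5th frame. Note that the GStreamer backend still decodes every frame internally; this only skips the buffer-to-numpy conversion for the frames you discard, while keeping the queue drained.

```python
def read_every_nth(cap, n=5):
    """Yield every n-th frame; grab() the others so the queue stays drained."""
    count = 0
    while True:
        if not cap.grab():              # advance to the next frame
            break                       # stream ended or read failed
        count += 1
        if count % n == 0:
            ok, frame = cap.retrieve()  # convert only the frames we keep
            if ok:
                yield frame
```

Usage would be, for example: `for frame in read_every_nth(cap, 5): ...` with the cv2.VideoCapture above.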

This solution has two problems. First, because the input frame rate is higher than the rate at which I read, unread frames accumulate, memory usage grows gradually, and after a while the system crashes. Second, we still decode every input frame, but I only want to decode every 5th frame. How can I drop some of the input frames?

How can I solve these problems, especially the gradually increasing memory usage?
Using videorate in GStreamer causes a large increase in CPU usage.
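One common fix for the growing-memory problem (a sketch, not taken from the replies below) is to bound appsink's own queue: with drop=true and max-buffers=1 the sink keeps only the newest sample and discards anything the application has not read yet, so a slow reader cannot make memory grow; sync=false additionally stops the sink from timestamp-pacing a live stream.

```python
# Sketch: bound appsink's internal queue so a slow consumer cannot make
# memory grow. drop=true discards old samples once max-buffers is reached;
# sync=false disables clock sync, which is usually unwanted for live RTSP.
pipeline = (
    'rtspsrc location=rtsp latency=300 ! '
    'rtph264depay ! h264parse ! '
    'omxh264dec ! '
    'nvvidconv ! video/x-raw, format=(string)BGRx ! '
    'videoconvert ! '
    'appsink drop=true max-buffers=1 sync=false'
)
```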

3- Since appsink only supports CPU buffers and the decoded frames are in NVMM buffers, does using GStreamer + OpenCV double the memory usage?

  1. For the GStreamer queue, please refer to https://gstreamer.freedesktop.org/documentation/coreelements/queue.html?gi-language=c and https://gstreamer.freedesktop.org/documentation/application-development/advanced/buffering.html
  2. Does your "decode" mean using a GStreamer decode plugin to decompress the compressed video stream? A decoder cannot skip input frames, because decoding depends on reference frames. The nvv4l2decoder plugin can skip output frames with its "drop-frame-interval" property. For other decoder plugins, it is better to write your own plugin to handle this problem.
  3. The nvvidconv plugin converts "video/x-raw(memory:NVMM)" buffers to "video/x-raw" buffers, and it is hardware accelerated.
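Putting the drop-frame-interval suggestion from point 2 into the original pipeline could look like this (a sketch; it assumes a platform where nvv4l2decoder is available, used here in place of omxh264dec). With drop-frame-interval=5 the decoder outputs only every 5th decoded frame, so downstream elements never see the skipped frames.

```python
# Sketch: use the nvv4l2decoder property mentioned in point 2 above.
# drop-frame-interval=5 -> the decoder emits only every 5th decoded frame.
pipeline = (
    'rtspsrc location=rtsp latency=300 ! '
    'rtph264depay ! h264parse ! '
    'nvv4l2decoder drop-frame-interval=5 ! '
    'nvvidconv ! video/x-raw, format=(string)BGRx ! '
    'videoconvert ! appsink'
)
```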

How can I see the source code of the nvv4l2decoder plugin?

The nvv4l2decoder plugin is not open source. You may need to develop your own plugin just for dropping frames.

So why does this work correctly for a file source but not for RTSP? Is the RTSP part not open source?

It is not filesrc that helps in this case; it is the mp4 file, which contains the frame rate, that lets videorate work. RTSP cannot provide a frame rate because it is live streaming. If you are interested, please refer to the videorate plugin source code, the qtdemux plugin source code, the MP4 file format spec, the AVC spec, the HEVC spec, the RTSP spec, the RTP spec, …

The RTSP src and sink plugins are open source. Please refer to https://gstreamer.freedesktop.org/.