Use hardware rescaling of Jetson Nano in Python code

Hi all,
I use GStreamer + OpenCV Python code for hardware decoding of multiple streams, and I want to know: is it possible to use the Video Image Converter (VIC) hardware with GStreamer?

import cv2

gstream_elements = (
    'rtspsrc location=rtsp latency=300 ! '
    'rtph264depay ! h264parse ! '
    'queue max-size-buffers=100 leaky=2 ! '
    'omxh264dec enable-max-performance=1 enable-low-outbuffer=1 ! '
    'video/x-raw(memory:NVMM), format=(string)NV12 ! '
    'nvvidconv ! video/x-raw, width=450, height=450, format=(string)BGRx ! '
    'videorate ! video/x-raw, framerate=(fraction)10/1 ! '
    'videoconvert ! '
    'appsink'
)
cap = cv2.VideoCapture(gstream_elements, cv2.CAP_GSTREAMER)
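If the stream URL or output size varies between cameras, the same pipeline string can be assembled from parameters. This is a sketch; the helper name and its defaults are mine, not from the original post:

```python
def build_pipeline(uri, width=450, height=450, fps=10, latency=300):
    """Assemble the decode pipeline string shown above (illustrative helper)."""
    return (
        f'rtspsrc location={uri} latency={latency} ! '
        'rtph264depay ! h264parse ! '
        'queue max-size-buffers=100 leaky=2 ! '
        'omxh264dec enable-max-performance=1 enable-low-outbuffer=1 ! '
        'video/x-raw(memory:NVMM), format=(string)NV12 ! '
        f'nvvidconv ! video/x-raw, width={width}, height={height}, format=(string)BGRx ! '
        f'videorate ! video/x-raw, framerate=(fraction){fps}/1 ! '
        'videoconvert ! '
        'appsink'
    )
```

The result would then be passed to `cv2.VideoCapture(build_pipeline('rtsp://...'), cv2.CAP_GSTREAMER)` as above.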

Does the element below, from the pipeline above, use the hardware (VIC) for rescaling?

‘nvvidconv ! video/x-raw , width=450, height=450, format=(string)BGRx !’

Hi,
Yes, it converts and scales the source frame to 450x450 BGRx in an NVMM buffer, and then copies it to a CPU buffer.

1- Does DeepStream also work this way, copying from the NVMM buffer into a CPU buffer?
Because the input rate is 25 fps and the output rate is 10 fps, this causes a bottleneck and memory gradually increases. How can I solve this problem? Of course, if I set leaky=2 (drop old buffers) on the queue, the memory problem is solved, but after a while the decoded frames become corrupted; when I set leaky=0 (no leak), the corruption is solved, but memory starts to grow gradually again.
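The mismatch described here can be quantified with some back-of-the-envelope arithmetic: with 25 fps in, 10 fps out, and the queue capped at 100 buffers, the queue fills in under 7 seconds, after which leaky=2 starts discarding frames (or, without leaking, memory keeps growing):

```python
# Queue growth for the rates described above (sketch).
fps_in, fps_out, capacity = 25, 10, 100
growth_per_sec = fps_in - fps_out          # 15 buffers pile up per second
seconds_to_full = capacity / growth_per_sec
print(seconds_to_full)                     # roughly 6.7 s until the queue is full
```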

original frame:
[screenshot: Screenshot from 2020-06-09 17-28-13]

decoded frame:
[screenshot: Screenshot from 2020-06-09 17-27-34]

Hi,
Please try appsink sync=false. There is synchronization in GStreamer, and it probably triggers the issue. Please disable it and give it a try.
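Applying that suggestion amounts to changing only the sink element of the pipeline string. In this sketch, sync is a basesink property; adding max-buffers=1 and drop=true (appsink properties) so that only the newest frame is kept is an extra assumption of mine, and the helper name is illustrative:

```python
def with_nonblocking_sink(pipeline):
    # sync=false stops the sink from waiting on buffer timestamps;
    # max-buffers=1 + drop=true keep only the newest frame in appsink.
    return pipeline.replace(
        'appsink',
        'appsink sync=false max-buffers=1 drop=true',
    )
```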

Thanks.
I will try that.
The queue I set above is for decoding, right? Is it possible to limit the queue and drop frames on the encoding side? I don't know whether a queue exists for encoding, or whether a queue is only defined for the decoder buffer.
If a queue also exists for encoding and its capacity is unlimited, that would also cause memory to grow gradually.

queue max-size-buffers=100 leaky=2

Hi,
We would suggest not setting leaky. We are not sure, but it looks like it may drop certain frames.

  leaky               : Where the queue leaks, if at all
                        flags: readable, writable, changeable in NULL, READY, PAUSED or PLAYING state
                        Enum "GstQueueLeaky" Default: 0, "no"
                           (0): no               - Not Leaky
                           (1): upstream         - Leaky on upstream (new buffers)
                           (2): downstream       - Leaky on downstream (old buffers)

We would suggest not to set leaky.

For the encoder, or for both?

If you mean not to set leaky for the decoder either: because the input frame rate is higher than the output frame rate, memory gradually increases. How can I solve this problem?

Hi,
The issue looks to be in videorate. If your source is 30fps and videorate is set to 10fps, 1 second of content will be played over 3 seconds, so the video frames keep piling up. You can try nvv4l2decoder and configure

  drop-frame-interval : Interval to drop the frames,ex: value of 5 means every 5th frame will be given by decoder, rest all dropped
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 30 Default: 0
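Putting that suggestion together, the original pipeline could be rewritten around nvv4l2decoder roughly as follows. This is a sketch (the helper name and defaults are mine): drop-frame-interval=3 keeps every 3rd frame, so a 30 fps source yields roughly 10 fps at the decoder and the videorate element is no longer needed:

```python
def nvv4l2_pipeline(uri, width=450, height=450, interval=3, latency=300):
    """Decode with nvv4l2decoder, dropping frames in the decoder itself."""
    return (
        f'rtspsrc location={uri} latency={latency} ! '
        'rtph264depay ! h264parse ! '
        f'nvv4l2decoder drop-frame-interval={interval} ! '
        f'nvvidconv ! video/x-raw, width={width}, height={height}, format=(string)BGRx ! '
        'videoconvert ! '
        'appsink sync=false'
    )
```

As before, the string would be handed to `cv2.VideoCapture(..., cv2.CAP_GSTREAMER)`.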

Thanks.
Does nvv4l2decoder work in JetPack 4.2.2? Does this decoder exist in GStreamer there?

How can I change the above GStreamer pipeline + OpenCV Python code to use nvv4l2decoder?

DeepStream uses only the NVMM buffer. What is the difference between my solution (OpenCV + GStreamer) and the DeepStream plugins for decoding? With my solution I copy the NVMM buffer into a CPU buffer; what are the disadvantages of that? CPU usage? RAM usage?

Hi,
There is a C sample of using nvv4l2decoder + OpenCV. Please check
https://developer.nvidia.com/embedded/L4T/r32_Release_v4.2/Sources/T210/public_sources.tbz2

But this code doesn’t work correctly.