H.264 hardware decoder with GStreamer + OpenCV + Python

Hi all,
I have a problem with H.264 RTSP stream decoding on a Jetson Nano, and I would be grateful for some guidance.
I want to decode multiple 1080p RTSP streams using the hardware decoder of a Jetson Nano with GStreamer + OpenCV + Python.
My Jetson Nano has:
JetPack 4.2.2 + OpenCV 3.4.6 (built from source)

When I used the code below, my CPU usage became high, but the decoded frames were correct.

gstream_elements = (
                'rtspsrc location={} latency={} ! '
                'rtph264depay ! h264parse ! '
                'queue max-size-buffers=100 leaky=2 ! '  ## leaky=2 (drop old buffers)
                'omxh264dec enable-max-performance=1 enable-low-outbuffer=1 ! '
                'video/x-raw(memory:NVMM), format=(string)NV12 ! '
                'nvvidconv ! video/x-raw, format=(string)BGRx ! '
                'videorate ! video/x-raw, framerate=(fraction){}/1 ! '
                'videoconvert ! '
                'appsink'). \
                format(url, latency, framerate)
cap = cv2.VideoCapture(gstream_elements, cv2.CAP_GSTREAMER)

Then I decided to control the output frame rate in Python instead, and commented out the 'videorate ! video/x-raw, framerate=(fraction){}/1 !' element above:

while cap.isOpened():
         ret, frame = cap.read()   # read() returns (success_flag, frame)
         sleep(1. / framerate)     # throttle to the target frame rate

With this approach the CPU usage dropped, but I got corrupted decoded frames (I attached a correctly decoded frame and a corrupted one). I solved both problems (high CPU usage and corrupted frames) with leaky=0 (no leakage), but that caused a new problem: memory usage grows gradually. In my opinion this is because of the bottleneck between the input and output frame rates.
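One way to keep memory bounded without a leaky queue is to let appsink itself discard stale frames via its drop and max-buffers properties. This is only a sketch, not a confirmed fix from this thread; build_pipeline, url, and latency are illustrative names:

```python
def build_pipeline(url, latency=300):
    # appsink drop=true max-buffers=1 keeps only the newest decoded
    # frame, so memory stays bounded even when cap.read() is slower
    # than the stream's frame rate.
    return (
        'rtspsrc location={} latency={} ! '
        'rtph264depay ! h264parse ! '
        'omxh264dec enable-max-performance=1 ! '
        'nvvidconv ! video/x-raw, format=(string)BGRx ! '
        'videoconvert ! '
        'appsink drop=true max-buffers=1'
    ).format(url, latency)
```

The resulting string is passed to cv2.VideoCapture(..., cv2.CAP_GSTREAMER) exactly as before.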

Q1- How to solve this problem?
Q2- Is it possible to use deepstream-python-apps for decoding multi-stream for this part of my codes?

Hi,
We don't suggest running gstreamer + OpenCV with multiple RTSP sources. Since there is limited CPU capability on Jetson Nano, performance may not be as expected.
For a Python sample, you may try

The source is uridecodebin and it should run well with RTSP sources (you need to set the correct URI).
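A minimal sketch of what a uridecodebin-based capture string could look like (assuming an OpenCV build with GStreamer support; the function name and URI are placeholders, not code from the DeepStream sample):

```python
def uridecodebin_pipeline(uri):
    # uridecodebin selects the demuxer and decoder automatically
    # (the hardware decoder on Jetson) and outputs raw video, so the
    # caller only supplies the URI.
    return (
        'uridecodebin uri={} ! '
        'nvvidconv ! video/x-raw, format=(string)BGRx ! '
        'videoconvert ! video/x-raw, format=(string)BGR ! '
        'appsink drop=true max-buffers=1'
    ).format(uri)

# e.g. cv2.VideoCapture(uridecodebin_pipeline('rtsp://host/stream'),
#                       cv2.CAP_GSTREAMER)
```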

1- In the DeepStream Python samples for multi-RTSP streaming, you used GStreamer + multi-threading, right? What's the difference between my solution and yours?

2- How can I use your multi-stream RTSP decoding code in my own Python code? Is it possible?

3- Is it possible to run custom detection with the DeepStream Python sample code?
4- You suggested that I not use gstreamer + OpenCV with multiple RTSP sources because of the limited CPU capability of the Jetson Nano. So where is the CPU being used?

Hi,

We pass hardware NVMM buffers through the pipeline and don't use CPU buffers. This is the optimal solution on Jetson platforms.
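To illustrate with the asker's own pipeline (a sketch with the memory boundary annotated, not DeepStream internals): everything before the plain video/x-raw caps stays in NVMM hardware memory, and only the last elements touch CPU buffers:

```python
nvmm_stage = (
    'rtspsrc location={} ! rtph264depay ! h264parse ! '
    'omxh264dec ! '                        # decodes into NVMM buffers
    'video/x-raw(memory:NVMM), format=(string)NV12 ! '
    'nvvidconv ! '                         # hardware conversion, NVMM -> CPU copy
)
cpu_stage = (
    'video/x-raw, format=(string)BGRx ! '  # first CPU-memory buffers
    'videoconvert ! appsink'               # CPU work: BGRx -> BGR for OpenCV
)
pipeline = (nvmm_stage + cpu_stage).format('rtsp://camera/stream')  # placeholder URL
```

A DeepStream pipeline avoids the CPU stage entirely by keeping inference and rendering in NVMM memory.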

Please refer to deepstream-test3

Yes, you can convert the model to be run on TensorRT.

Please run sudo tegrastats to get the system status. If you hit a performance issue, you can check which hardware component is capping performance.