High latency when running a GStreamer pipeline through OpenCV

When I read RTSP streams directly with GStreamer components, the latency is very low. Here is the command:

gst-launch-1.0 rtspsrc location="rtspt://admin:wzu123456@" latency=0 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! xvimagesink sync=false

But when I use GStreamer through OpenCV, the latency becomes high. How do I fix this?
Here is the OpenCV code and the associated errors.

import cv2
import queue
import time
import threading

q = queue.Queue()

pipeline = 'rtspsrc location=rtsp://admin:wzu123456@ latency=0 ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! appsink sync=false '

def Receive():
    print("start Receive")
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    ret, frame = cap.read()
    while ret:
        q.put(frame)
        ret, frame = cap.read()

def Display():
    print("Start Displaying")
    while True:
        if not q.empty():
            frame = q.get()
            cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

if __name__ == '__main__':
    p1 = threading.Thread(target=Receive)
    p2 = threading.Thread(target=Display)
    p1.start()
    p2.start()
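As an aside on the queue itself: the capture thread can outrun the display thread, so frames pile up in the unbounded queue and add latency on top of the pipeline's. One common mitigation (a sketch, not part of the original post) is to keep only the newest frame:

```python
import queue

def put_latest(q: queue.Queue, item) -> None:
    """Replace whatever is waiting in the queue with the newest item,
    so the consumer always gets the most recent frame."""
    try:
        q.get_nowait()      # drop the stale frame, if any
    except queue.Empty:
        pass
    q.put(item)

# Demonstration with integers standing in for frames:
latest = queue.Queue(maxsize=1)
for i in range(5):
    put_latest(latest, i)
assert latest.get() == 4    # only the newest item survives
```

Calling `put_latest(q, frame)` from the capture loop instead of a plain put keeps the display thread at most one frame behind the camera.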

(ocr) wzu@wzu-desktop:~/jjh/pythonProject14$ python gs.py
start Receive
Start Displaying

(python:10266): GStreamer-CRITICAL **: 14:54:07.578: gst_caps_is_empty: assertion 'GST_IS_CAPS (caps)' failed

(python:10266): GStreamer-CRITICAL **: 14:54:07.578: gst_caps_truncate: assertion 'GST_IS_CAPS (caps)' failed

(python:10266): GStreamer-CRITICAL **: 14:54:07.578: gst_caps_fixate: assertion 'GST_IS_CAPS (caps)' failed

(python:10266): GStreamer-CRITICAL **: 14:54:07.578: gst_caps_get_structure: assertion 'GST_IS_CAPS (caps)' failed

(python:10266): GStreamer-CRITICAL **: 14:54:07.578: gst_structure_get_string: assertion 'structure != NULL' failed

(python:10266): GStreamer-CRITICAL **: 14:54:07.578: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Allocating new output: 2560x1440 (x 14), ThumbnailMode = 0
OPENMAX: HandleNewStreamFormat: 3605: Send OMX_EventPortSettingsChanged: nFrameWidth = 2560, nFrameHeight = 1440 
[ WARN:0] global /home/wzu/jjh/opencv/opencv-4.5.4/modules/videoio/src/cap_gstreamer.cpp (1063) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /home/wzu/jjh/opencv/opencv-4.5.4/modules/videoio/src/cap_gstreamer.cpp (1100) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1
^CException ignored in: <module 'threading' from '/home/wzu/archiconda3/envs/ocr/lib/python3.8/threading.py'>
Traceback (most recent call last):
  File "/home/wzu/archiconda3/envs/ocr/lib/python3.8/threading.py", line 1388, in _shutdown

OpenCV needs frame data in a CPU buffer, so it has to copy each frame from the hardware DMA buffer to a CPU buffer. This copy may dominate the performance. Please execute sudo jetson_clocks to run the CPU cores at their maximum clock.

Running this command did reduce the latency, but when I wanted a color display, the latency went up again. Here is the modified pipeline:

pipeline = "rtspsrc location=rtsp://admin:wzu123456@ latency=0 ! rtph264depay ! h264parse ! nvdec ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! appsink sync=false"

Is it because videoconvert runs on the CPU? What can I do to further reduce the latency?
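One knob worth trying alongside the conversion question: appsink has standard drop and max-buffers properties that make it discard stale buffers, so a slow OpenCV consumer never accumulates a backlog. A sketch (the elements and caps mirror the pipelines above; the camera address is a placeholder, not from the original post):

```python
# appsink's drop=true and max-buffers=1 keep only the newest decoded
# frame; the BGR caps before appsink are what OpenCV's GStreamer
# backend expects for 3-channel frames.
# "<camera-ip>" is a hypothetical placeholder.

def low_latency_pipeline(url: str) -> str:
    return (
        f"rtspsrc location={url} latency=0 ! "
        "rtph264depay ! h264parse ! omxh264dec ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink sync=false drop=true max-buffers=1"
    )

print(low_latency_pipeline("rtsp://admin:wzu123456@<camera-ip>"))
```

Open it with cv2.VideoCapture(low_latency_pipeline(url), cv2.CAP_GSTREAMER). Note that videoconvert still runs on the CPU, so this reduces queue backlog rather than conversion cost.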

The CPU capability of the Jetson Nano is limited. You may consider not using OpenCV. If you are developing a deep learning use case, you can try the DeepStream SDK.

I have recompiled OpenCV with CUDA support and enabled the GStreamer option. It is still not working.

If you would like to use some CUDA filters, you may refer to this sample:
Is it possible to directly map the NvBuffer to a OpenCV GpuMat? - #4 by DaneLLL

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.