Displaying to the screen with OpenCV and GStreamer

When using OpenCV and imshow, I find that there is significant delay (I assume due to upscaling), so I would like to avoid imshow if possible. Instead I am trying to use GStreamer with nveglglessink. My Python code is:
gst = "appsrc ! video/x-raw,format=RGBA,width=1920,height=1080 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080' ! nvegltransform ! nveglglessink -e "
vw = cv2.VideoWriter(gst, cv2.CAP_GSTREAMER, 0, 30, (DISPLAY_WIDTH, DISPLAY_HEIGHT))

Whenever I try to launch this code I get the error: [ WARN:0] global /usr/local/include/opencv-4.3.0/modules/videoio/src/cap_gstreamer.cpp (1424) open OpenCV | GStreamer warning: error opening writer pipeline: syntax error

I can’t find any problems with the pipeline. Any suggestions?

Python 3.6.9
OpenCV 4.3 with CUDA and GStreamer (I checked)
GStreamer 1.14.5

Hi,
Please refer to the thread:

Generally we save to a file when calling cv2.VideoWriter(). Not sure if it works with a display sink. Let's see if others can share suggestions.

You may try to remove the single quotes and ‘-e’ from your pipeline.
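For example, the writer pipeline from the first post would become the following (a sketch of this suggestion; caps strings need no quoting when the string is passed to cv2.VideoWriter rather than to a shell, and -e is a gst-launch-1.0 option, not a pipeline element):

```python
# Original pipeline with the single quotes and the trailing "-e" removed
gst = (
    "appsrc ! video/x-raw,format=RGBA,width=1920,height=1080 ! nvvidconv ! "
    "video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080 ! "
    "nvegltransform ! nveglglessink "
)
```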

@DaneLLL I am aware that this isn’t the normal use case, but I want to give it a shot and see if it improves performance over OpenCV imshow.

@Honey_Patouceul I removed those and the stream generation error went away, but now there is an appsrc internal data stream error. Any ideas? Thanks!

In case your opencv frames are BGR, you would have to convert in the pipeline.
I am currently unable to try, but this may help:

gst = "appsrc ! queue ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! nvegltransform ! nveglglessink "
vw = cv2.VideoWriter(gst, cv2.CAP_GSTREAMER, 0, 30, (FRAME_WIDTH, FRAME_HEIGHT))

Note that the width and height (and fps) should match the frames you’re pushing into the writer.
When it works, you can resize with nvvidconv by adding caps with the new size:

gst = "appsrc ! queue ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! video/x-raw(memory:NVMM), width=DISPLAY_WIDTH, height=DISPLAY_HEIGHT ! nvegltransform ! nveglglessink "
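To make the resize pipeline above concrete, here is a sketch with the DISPLAY_WIDTH/DISPLAY_HEIGHT placeholders substituted via an f-string (the 1920x1080 values are an assumption for illustration):

```python
# Assumed display size; adjust to your monitor
DISPLAY_WIDTH, DISPLAY_HEIGHT = 1920, 1080

# nvvidconv rescales to the size given in the NVMM caps that follow it
gst = (
    "appsrc ! queue ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! "
    f"video/x-raw(memory:NVMM),width={DISPLAY_WIDTH},height={DISPLAY_HEIGHT} ! "
    "nvegltransform ! nveglglessink "
)
```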

@Honey_Patouceul Thanks! That works!

It appears that the stream is faster than using imshow, although it still has a bit of latency (I'm not sure yet whether that comes from my code). With imshow the framerate is uneven and slower than I would like, so this is better for my use case.

Hi,
1- How can I use hardware acceleration for encoding with GStreamer + OpenCV in Python when writing to a file?
2- How can I use hardware acceleration for decoding with GStreamer + OpenCV in Python, using nvv4l2decoder?
3- Are the decoding and encoding codes the same, and do they work correctly on both the Nano and Xavier NX?

Hi,
Please refer to the python sample:

import sys
import cv2

def read_cam():
    cap = cv2.VideoCapture("filesrc location=/home/nvidia/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink  ")

    w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
    fps = cap.get(cv2.CAP_PROP_FPS)
    print('Src opened, %dx%d @ %d fps' % (w, h, fps))

    gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=test.mkv "
    out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))
    if not out.isOpened():
        print("Failed to open output")
        exit()

    if cap.isOpened():
        while True:
            ret_val, img = cap.read()
            if not ret_val:
                break
            out.write(img)
            cv2.waitKey(1)
    else:
        print("pipeline open failed")

    print("successfully exit")
    cap.release()
    out.release()


if __name__ == '__main__':
    read_cam()

The sample follows Honey_Patouceul’s suggestion and works fine. FYR.

@DaneLLL
Thanks a lot.
What does it mean when we use:
video/x-raw, format=BGR and video/x-raw(memory:NVMM),format=(string)NV12
In this program, appsrc ! video/x-raw, format=BGR — does that mean the data pushed into appsrc is video/x-raw, format=BGR? Or is this element for converting?
In my opinion appsrc is a CPU buffer, and are both video/x-raw, format=BGR and video/x-raw(memory:NVMM),format=(string)NV12 GPU buffers?

Hi,
Because OpenCV uses BGR CPU buffers and the hardware encoder takes NVMM buffers, you need to convert the buffers through videoconvert ! nvvidconv. The video/x-raw, ... strings are caps (capabilities), not elements: they describe the buffer format at that point in the pipeline rather than performing any conversion themselves.
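The conversion chain can be sketched as an annotated pipeline string (element names follow the encoding sample above):

```python
# Sketch of the buffer journey from OpenCV's BGR CPU memory to the
# hardware encoder's NVMM memory
gst_out = (
    "appsrc ! video/x-raw,format=BGR "           # OpenCV pushes BGR CPU buffers here
    "! queue "
    "! videoconvert ! video/x-raw,format=BGRx "  # CPU-side convert to a 4-byte format
    "! nvvidconv "                               # copies into NVMM (hardware) buffers
    "! nvv4l2h264enc "                           # HW encoder consumes NVMM buffers
    "! h264parse ! matroskamux ! filesink location=test.mkv "
)
```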