Problem getting a GStreamer pipeline working with OpenCV and Python

List,
I am having a problem running a GStreamer pipeline from Python 3 via OpenCV. I do not know GStreamer very well, although I am comfortable with OpenCV, so my issues stem from my own ignorance rather than an actual problem with the Jetson TX2. I am working with JetPack 4.4, running OpenCV 4.1.1 with GStreamer support (the unchanged build from JetPack), and Python 3.6.9. On the hardware side I am using a Jetson TX2 devkit, running headless, with an ELP USB camera. With this setup I am able to take pictures, record video, and stream video from the Jetson using GStreamer directly.

My problem arises when I attempt to use it from Python. Right now I am using a very simple Python script to open my camera and stream video to another computer on the same network. My goal is to later use a neural network I made to do detection, add bounding boxes, and pipe the post-processed video via GStreamer to another computer. I have tested a number of pipelines on the command line and in general they work fine. But when I add the same pipeline to Python and attempt to use it, it errors out.
For example, I am using these pipelines:

On the Jetson/server
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=640, height=480' ! omxh264enc ! h264parse ! rtph264pay config-interval=1 ! udpsink host=192.168.86.26 port=5000
On the client/laptop
gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp, encoding-name=H264, payload=96' ! rtph264depay ! h264parse ! avdec_h264 ! xvimagesink

This works fine, although at a very low frame rate.
Now if I try to do the equivalent in Python, it won't work. My goal is to get the output from my YOLOv4 implementation to stream as H.264.

import cv2

cap = cv2.VideoCapture(0)

framerate = 25.0
gst_str = 'video/x-raw, format=YUY2 ! nvvidconv ! video/x-raw(memory:NVMM), width=640, height=480 ! omxh264enc ! h264parse ! rtph264pay config-interval=1 ! udpsink host=192.168.86.26 port=5000'

out = cv2.VideoWriter(gst_str, 0, framerate, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        out.write(frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

cap.release()
out.release()

I know this is an issue with how I am implementing this, but I am sort of clueless about how to integrate GStreamer with OpenCV. I have looked around at some tutorials, but none were helpful. I would appreciate any assistance, even if it is just pointing me to a similar example somewhere.

Hi,
For running deep learning inference, we have DeepStream SDK. Please check

https://forums.developer.nvidia.com/t/announcing-developer-preview-for-deepstream-5-0/121619
And Python samples:

Please take a look and develop your use case based on the reference samples.


DaneLLL, thank you for the links, but I am aware of them and I have already created the inference engine. What I am looking for is to pass video to an OpenCV VideoWriter whose GStreamer string starts a pipeline. That pipeline would be fed by the stream from VideoCapture, post-processed further by OpenCV, with those frames then written out through GStreamer. Does that clear up what I am looking for? I would appreciate any assistance.

Hi,
We don't have much experience with cv2.VideoWriter() and may not be able to give a proper suggestion. Could you please ask on the OpenCV forum?

You can ask about a software encoder such as x264enc, and see if users can share suggestions and guidance. If there is a working pipeline, the encoder can be replaced with nvv4l2h264enc to enable hardware acceleration, roughly as sketched below.
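As a rough illustration only (not a tested pipeline; the caps and surrounding elements are assumptions modeled on the writer strings later in this thread), the swap might look like:

# A minimal sketch, assuming a working software-encoded pipeline exists first.
# x264enc and nvv4l2h264enc are real GStreamer elements, but the surrounding
# caps are assumptions based on the pipelines elsewhere in this thread.
import cv2

# Software encoding (portable, higher CPU load):
sw_out = ("appsrc ! video/x-raw, format=BGR ! videoconvert ! "
          "x264enc ! h264parse ! rtph264pay config-interval=1 ! "
          "udpsink host=192.168.86.26 port=5000")

# Same pipeline with the hardware encoder swapped in:
hw_out = ("appsrc ! video/x-raw, format=BGR ! videoconvert ! video/x-raw, format=RGBA ! "
          "nvvidconv ! video/x-raw(memory:NVMM) ! "
          "nvv4l2h264enc ! h264parse ! rtph264pay config-interval=1 ! "
          "udpsink host=192.168.86.26 port=5000")

out = cv2.VideoWriter(hw_out, cv2.CAP_GSTREAMER, 0, 30.0, (640, 480))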


First, you would check the mode used for capture when using the V4L API:

cap = cv2.VideoCapture(0)
w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = cap.get(cv2.CAP_PROP_FPS)
print('Src opened, %dx%d @ %d fps' % (w, h, fps))

If this is not the expected mode, you may also try a gstreamer pipeline (you would adjust resolution and framerate according to your camera):

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! videoconvert ! video/x-raw, format=BGR ! appsink", cv2.CAP_GSTREAMER)
w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = cap.get(cv2.CAP_PROP_FPS)
print('Src opened, %dx%d @ %d fps' % (w, h, fps))

For the writer to udpsink, it should be:

gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw, format=RGBA ! nvvidconv ! video/x-raw(memory:NVMM), width=640, height=480 ! omxh264enc insert-vui=true insert-sps-pps=1 ! h264parse ! rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=5000 "
out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))
if not out.isOpened():
    print("Failed to open output")
    exit()
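
Putting the capture and writer together, a minimal loop could then look like this (the processing step is just a placeholder for your own detection/drawing code):

while True:
    ret, frame = cap.read()
    if not ret:
        print("Camera error.")
        break
    # ... run your detection / draw bounding boxes on `frame` here ...
    out.write(frame)

cap.release()
out.release()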

Thank you for your response. I will try your response out.

Honey,

After playing with the code a bit, this part seems to work just fine:

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! videoconvert ! video/x-raw, format=BGR ! appsink", cv2.CAP_GSTREAMER)
w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = cap.get(cv2.CAP_PROP_FPS)
print('Src opened, %dx%d @ %d fps' % (w, h, fps))

However, I am having issues here:
gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw, format=RGBA ! nvvidconv ! video/x-raw(memory:NVMM), width=640, height=480 ! omxh264enc insert-vui=true insert-sps-pps=1 ! h264parse ! config-interval=1 ! udpsink "
out= cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))
if not out.isOpened():
    print("Failed to open output")
    exit()

My full code is:

import time
import cv2

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! videoconvert ! video/x-raw, format=BGR ! videoconvert ! videoscale ! appsink", cv2.CAP_GSTREAMER)

w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = cap.get(cv2.CAP_PROP_FPS)
print('Src opened, %dx%d @ %d fps' % (w, h, fps))

gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw, format=RGBA ! nvvidconv ! video/x-raw(memory:NVMM), width=640, height=480 ! omxh264enc insert-vui=true insert-sps-pps=1 ! h264parse ! config-interval=1 ! udpsink "

out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))

if not out.isOpened():
    print("Failed to open output")
    exit()

while True:
    ret, frame = cap.read()
    if ret is True:
        frame = cv2.flip(frame, 1)
        out.write(frame)
    else:
        print("Camera error.")
        time.sleep(10)

cap.release()

When I run this I get:

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Src opened, 640x480 @ 30 fps

(python3:21373): GStreamer-CRITICAL **: 14:55:55.975: gst_element_link_pads_filtered: assertion 'GST_IS_BIN (parent)' failed

(python3:21373): GStreamer-CRITICAL **: 14:55:55.979: gst_element_link_pads_filtered: assertion 'GST_IS_BIN (parent)' failed

(python3:21373): GStreamer-CRITICAL **: 14:55:55.981: gst_element_link_pads_filtered: assertion 'GST_IS_BIN (parent)' failed
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1422) open OpenCV | GStreamer warning: error opening writer pipeline: syntax error

(python3:21373): GStreamer-CRITICAL **: 14:55:55.981: gst_bus_have_pending: assertion 'GST_IS_BUS (bus)' failed
Failed to open output

I have also tried running the pipeline
appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw, format=RGBA ! nvvidconv ! video/x-raw(memory:NVMM), width=640, height=480 ! omxh264enc insert-vui=true insert-sps-pps=1 ! h264parse ! config-interval=1 ! udpsink

using videotestsrc, and it complains of a syntax error.

Any thoughts?

Sorry, I made an error when writing the writer pipeline: the rtph264pay plugin was missing. I've edited my post to correct it.
Also note that you may have to set the host and port options of udpsink.
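
For reference, the corrected writer string would then read as follows (host and port reuse the example address from the start of this thread; adjust them for your network):

# Corrected writer pipeline: rtph264pay restored between h264parse and udpsink.
gst_out = ("appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! "
           "video/x-raw, format=RGBA ! nvvidconv ! "
           "video/x-raw(memory:NVMM), width=640, height=480 ! "
           "omxh264enc insert-vui=true insert-sps-pps=1 ! h264parse ! "
           "rtph264pay config-interval=1 ! udpsink host=192.168.86.26 port=5000")
out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))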
