Streaming an openCV capture with gstreamer in Python

Trying to stream a video through GStreamer from Python.

My Python code is:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

#gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12 ! nvv4l2h264enc maxperf-enable=1 insert-sps-pps=1 ! h264parse ! rtph264pay pt=96 ! queue ! application/x-rtp, media=video, encoding-name=H264 ! udpsink host=192.168.43.208 port=5000"
#gst_out = 'appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink host=127.0.0.1 port=5000'
gst_out = "appsrc ! video/x-raw, format=BGR, pixel-aspect-ratio=1/1 ! queue ! videoconvert ! video/x-raw, format=BGRx ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! video/x-h264, stream-format=byte-stream, alignment=au ! h264parse ! video/x-h264, stream-format=byte-stream ! rtph264pay pt=96 config-interval=1 ! application/x-rtp, media=video, encoding-name=H264 ! udpsink host=224.1.1.1 port=5000 auto-multicast=true"

#out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, 20, (640,480), True)
out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, (640,480))
#out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, 20, (640,480))

if not out.isOpened():
    print("Writer failed")
    exit()

print("Writer opened")

while True:
    ret, frame = cap.read()
    if not ret:
        break

    out.write(frame)

    if (cv2.waitKey(1) & 0xFF) == 27:
        break  # esc to quit

out.release()
cap.release()
cv2.destroyAllWindows()

I tried several different pipelines and cv2.VideoWriter() argument combinations, as you can see from the commented-out lines, but I always receive this error:
Traceback (most recent call last):
  File "/home/xbio/software/Cam2_5.py", line 15, in <module>
    out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, (640,480))
TypeError: must be real number, not tuple

Edit: discovered that my OpenCV installation has no GStreamer support; at the moment I'm trying to figure out how to install GStreamer support for OpenCV.

Hi,
The default OpenCV 4.1.1 enables GStreamer, so you may have manually installed another version. For enabling GStreamer (or CUDA), you can refer to this script:

Hi @DaneLLL ,

Many thanks

I ran that script and now seem to have GStreamer support, but Pylint inside Visual Studio Code generates an error for every member of cv2, for example for cv2.destroyAllWindows: "Module 'cv2' has no 'destroyAllWindows' member".
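(That Pylint error is usually a false positive on cv2's compiled extension module rather than a broken install. A commonly suggested workaround, assuming a standard Pylint setup rather than anything specific to this thread, is to whitelist cv2 in .pylintrc:)

```ini
# .pylintrc — let Pylint load the compiled cv2 extension for introspection
[MASTER]
extension-pkg-whitelist=cv2
```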

Many thanks @DaneLLL ,

Is it possible to call and use test-launch from Python, or to call GStreamer to send video over RTSP (possibly integrating it with OpenCV)? I'm trying to stream two videos to the same udpsink host, which doesn't work on the client side.

I can show one of the videos but not the other, and receive the error below:

NvxBaseWorkerFunction[2575] comp OMX.Nvidia.std.iv_renderer.overlay.yuv420 Error -2147479552 
nvdc: open: Too many open files
nvdc: failed to open '/dev/tegra_dc_1'.

Hi,
It might be easier to run UDP in python OpenCV. Please take a look at

Hi @DaneLLL ,

After installing OpenCV through the above script:

AastaNV/JEP/blob/master/script/install_opencv4.5.0_Jetson.sh

to enable GStreamer support on the AGX, my OpenCV loses the ability to set camera properties like width, height, and frame rate:

cam1.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam1.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cam1.set(cv2.CAP_PROP_FPS, 60)

which works perfectly well when I install OpenCV back with:

sudo apt-get install python3-opencv

Is there a solution for this, or is it possible to add only GStreamer support to the current OpenCV installation?

Thanks,
Sefa

Hi
The OpenCV 4.1.1 installed through SDKManager has GStreamer enabled (WITH_GSTREAMER=ON). If you don't need CUDA filters, you can use the default one instead of doing a manual installation.

Thanks @DaneLLL

Last time I installed JetPack 4.4.2 on the AGX, I didn't notice that OpenCV was already installed, and I ran apt-get install because Python gave an error about not finding OpenCV. For Python 3, pip3, and OpenCV 4.1.1, do I need to install any of these after the SDK, or do I need to set any environment variables? Does it matter which version of the SDK?

Hi,
We are not sure which version is good for the use case, but for running this sample, the package installed through SDKManager should be good enough.

Hi,

What I mean is:
"Do all versions of JetPack contain OpenCV 4.1.1 by default?"
Regards

Hi,

Yes, it is OpenCV 4.1.1 for r32 releases.

Hi @DaneLLL,

I set up the AGX from scratch, so now I have the OpenCV 4.1.1 that comes by default.

I still have the issue of

.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

not working.

I can set the camera resolution with v4l2-ctl directly on Linux:

$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720

when I execute:

$ v4l2-ctl -d /dev/video0 --get-fmt-video

I get the result:

Format Video Capture:
Width/Height : 1280/720
Pixel Format : 'UYVY'
Field : None
Bytes per Line : 2560
Size Image : 1843200
Colorspace : Default
Transfer Function : Default (maps to Rec. 709)
YCbCr/HSV Encoding: Default (maps to ITU-R 601)
Quantization : Default (maps to Limited Range)
Flags :

But with this Python code:

import cv2
import os
import numpy as np

cam1 = cv2.VideoCapture(0)

# Print the resolution before set
print("\n")
print("Before Set:")
#print("fps = ", cam1.get(cv2.CAP_PROP_FPS))
print("Width = ", cam1.get(cv2.CAP_PROP_FRAME_WIDTH))
print("Height = ", cam1.get(cv2.CAP_PROP_FRAME_HEIGHT))
print("\n")

cam1.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam1.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
#cam1.set(cv2.CAP_PROP_FPS, 30)

# Print the resolution after set
print("After Set:")
#print("fps = ", cam1.get(cv2.CAP_PROP_FPS))
print("Width = ", cam1.get(cv2.CAP_PROP_FRAME_WIDTH))
print("Height = ", cam1.get(cv2.CAP_PROP_FRAME_HEIGHT))
print("\n")

while True:
    ret_val1, img1 = cam1.read()

    cv2.imshow('1', img1)

    if cv2.waitKey(1) == 27:
        break  # esc to quit

cam1.release()
cv2.destroyAllWindows()

When I run the Python code, I get:

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

Before Set:
Width = 3840.0
Height = 2160.0

VIDIOC_S_FMT: failed: Device or resource busy
After Set:
Width = 3840.0
Height = 2160.0

Gtk-Message: 17:17:54.126: Failed to load module “canberra-gtk-module”

This is not the case when I install OpenCV back through apt (I guess it installs OpenCV 3.something?); then the OpenCV frame-size set commands work perfectly.

Hi,
We would suggest running a GStreamer pipeline instead of cv2.VideoCapture(0). Here is a sample:
How to Filesink and Appsink simultaneously in OpenCV Gstreamer. (Gstreamer pipeline included) - #7 by DaneLLL
See if it works with this method.

Hi @DaneLLL ,

I'm getting this warning message when running exactly the above sample:

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

Could you point me to some Python examples that don't use OpenCV?

This warning is normal: a live feed has no duration, so the current position cannot be computed.

If you just want to stream without processing, you don’t need opencv, you can just use gstreamer.

First, try to get your camera displayed (assuming you're running with a GUI):

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, format=UYVY, width=1280, height=720 ! xvimagesink

If that works, you can stream as RTP/UDP multicast (available to any host on the LAN with multicast on port 5000 allowed) with:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, format=UYVY, width=1280, height=720 ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! h264parse ! rtph264pay config-interval=1 ! udpsink host=224.1.1.1 port=5000 auto-multicast=true

If you prefer using RTSP, you would install the package gst-rtsp-server, build the test-launch example, and then run:

./test-launch "v4l2src device=/dev/video0 ! video/x-raw, format=UYVY, width=1280, height=720 ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! h264parse ! rtph264pay config-interval=1 pt=96 name=pay0"
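To drive test-launch from Python (as asked earlier in the thread), one option is simply to spawn it as a subprocess. A sketch, assuming the binary path points at wherever you built the example:

```python
import subprocess

PIPELINE = (
    "v4l2src device=/dev/video0 ! video/x-raw, format=UYVY, "
    "width=1280, height=720 ! nvvidconv ! nvv4l2h264enc insert-vui=1 ! "
    "h264parse ! rtph264pay config-interval=1 pt=96 name=pay0"
)

def start_rtsp_server(binary="./test-launch"):
    # Spawns the gst-rtsp-server test-launch example with the pipeline
    # above; terminate() the returned Popen handle to stop serving.
    return subprocess.Popen([binary, PIPELINE])
```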

Many thanks

I need to stream after processing.

Is there an alternative to openCV for processing the stream?

Currently I'm streaming using UDP, through appsink. Even just reading the camera with OpenCV and streaming it back out causes around 2 seconds of latency compared to streaming directly from the camera. Any advice?

You may better explain your use case and what processing you want to perform for better advice.
Also note that OpenCV imshow may not be efficient on Jetson and may be the bottleneck. You may try a VideoWriter instead (searching this forum you may find various examples of VideoWriters).