Conversion of GStreamer launch to OpenCV pipeline for camera OV9281

Hi,
I'm trying to convert a gst-launch command to an OpenCV pipeline.
Using the following gst-launch command, I am able to launch the camera:

gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw,width=1280,height=800,format=(string)GRAY8" ! videoconvert ! videoscale ! "video/x-raw,width=640,height=400" ! xvimagesink sync=false

Now I need to convert this into an OpenCV pipeline. I tried, but I always get the following error:

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: could not parse caps “video/x-raw, , format=(string)GRAY8”
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
libpng warning: Image width is zero in IHDR
libpng warning: Image height is zero in IHDR
libpng error: Invalid IHDR data

This is the pipeline I'm trying to run. I know OpenCV requires BGR format, so:

camSet='v4l2src device="/dev/video0" ! "video/x-raw,width=1280,height=800,format=(string)GRAY8" ! videoconvert ! "video/x-raw,width=640,height=400,format=BGRx" ! videoconvert ! video/x-raw, format=BGR ! appsink'

cam = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)
_, frame = cam.read()
cv2.imwrite('test' + '.png', frame)

cam.release()

Can anyone assist me with this?

The pipeline string should not have any quoting other than the outer start and end quotes. Extra quoting is only required with gst-launch from a shell when parentheses are used.

Image scaling on the CPU is done by videoscale, not videoconvert. So your pipeline would be:

camSet='v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=800,format=GRAY8 ! videoscale ! video/x-raw, width=640, height=400 ! videoconvert ! video/x-raw, format=BGR ! appsink'
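For reference, here is a minimal sketch of how this corrected pipeline could be driven from Python. The function names (`build_pipeline`, `capture_one_frame`) are just illustrative, and it assumes an OpenCV build with GStreamer support and the device path from this thread:

```python
# Sketch: assemble and open the corrected CPU-scaling pipeline from Python.
# Device path and resolutions are the ones from this thread; adjust as needed.

def build_pipeline(device="/dev/video0", in_w=1280, in_h=800, out_w=640, out_h=400):
    """Build the GStreamer pipeline string with no inner quoting."""
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={in_w},height={in_h},format=GRAY8 ! "
        f"videoscale ! video/x-raw,width={out_w},height={out_h} ! "
        f"videoconvert ! video/x-raw,format=BGR ! appsink"
    )

def capture_one_frame(path="test.png"):
    """Open the pipeline and save a single frame.
    Requires a GStreamer-enabled OpenCV build and a connected camera."""
    import cv2
    cam = cv2.VideoCapture(build_pipeline(), cv2.CAP_GSTREAMER)
    if not cam.isOpened():
        raise RuntimeError("Could not open pipeline - is OpenCV built with GStreamer?")
    ok, frame = cam.read()
    if ok:
        cv2.imwrite(path, frame)  # frame is 640x400 BGR here
    cam.release()
```

Checking `isOpened()` right after construction is what catches the "could not parse caps" failure early, instead of getting the libpng errors later from an empty frame.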

You may also use HW scaling and conversion with nvvidconv on Jetson:

camSet='v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=800,format=GRAY8 ! nvvidconv ! video/x-raw(memory:NVMM), format=I420, width=640, height=400 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink'

I resolved it; I forgot to update the question.
I used:

camSet = 'v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=800,format=(string)GRAY8 ! videoconvert ! videoscale ! video/x-raw,width=640,height=400,format=BGR ! appsink'

It still gives a warning message, but it works. Thanks for your help. Much appreciated.

Glad to see you’ve moved forward.

Your current pipeline converts the full-resolution frame to BGR before scaling it down. You may save some CPU load by scaling first and then converting the smaller number of pixels.
You would save even more CPU load with my second pipeline, which leverages HW scaling and conversion.


I see. I'll definitely try that tomorrow and let you know. Thanks.

I did try your pipeline. It works, but I get the same warning message. Is there any way to resolve this, or can it be ignored?

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

This warning is normal: a live stream has no duration, so the position query fails. You can safely ignore it.


Hi @Honey_Patouceul ,
I tried to use the pipeline, but when I capture a frame there is always a delay. I even tried restricting the buffer to a maximum of 1, but there is still a delay in capturing the latest frame. Here is the pipeline I'm using at the moment:

camSet = ('v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=800,format=GRAY8, framerate=30/1 ! nvvidconv ! video/x-raw(memory:NVMM),'
          ' format=I420, width=1280, height=800 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink max-buffers=1 drop=true')

Do you have any idea about this? Does it have something to do with latency?

If you tell us your use case, we can give better advice. OpenCV doesn't require BGR; it can also accept GRAY8 frames. It depends on which OpenCV algorithms you intend to use. But since all the information is in monochrome, if the algorithms you want are available for monochrome in OpenCV, that would be the best solution.

Also, try adding the option io-mode=2 to your v4l2src element; it may help in some cases:

camSet='v4l2src device=/dev/video0 io-mode=2 ! ...'

It's just that I'm doing feature detection on the monochrome image. In OpenCV I'm just using the findContours function to detect contours in the image. I think I can directly pass the GRAY8 format to OpenCV. That might remove the delay in capturing the frame.

So you would just try:

camSet='v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=800,format=GRAY8, framerate=30/1 ! appsink'

# Or
camSet='v4l2src device=/dev/video0 io-mode=2 ! video/x-raw,width=1280,height=800,format=GRAY8, framerate=30/1 ! appsink'

If this doesn’t work, you may try to add videoconvert in between.

You would then read a 1-channel cv::Mat with VideoCapture's read().
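On the Python side, such a frame would come back as a 2-D array (height × width) rather than a 3-channel one. A small sketch of how one might verify this, assuming the GRAY8 pipeline above (`is_gray8_frame` and `read_gray_frame` are just illustrative helpers):

```python
# Sketch: with a GRAY8 appsink, VideoCapture.read() returns a single-channel
# frame - in Python, a 2-D numpy array of shape (height, width), dtype uint8.
import numpy as np

def is_gray8_frame(frame, width=1280, height=800):
    """Return True if the frame is single-channel 8-bit at the expected size."""
    return (
        isinstance(frame, np.ndarray)
        and frame.dtype == np.uint8
        and frame.shape == (height, width)  # no channel axis for GRAY8
    )

def read_gray_frame():
    """Read one monochrome frame.
    Requires a GStreamer-enabled OpenCV build and a connected camera."""
    import cv2
    pipeline = ("v4l2src device=/dev/video0 ! "
                "video/x-raw,width=1280,height=800,format=GRAY8, framerate=30/1 ! "
                "appsink")
    cam = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    ok, frame = cam.read()
    cam.release()
    return frame if ok and is_gray8_frame(frame) else None
```

A frame in this shape can be passed straight to cv2.findContours (after thresholding), with no BGR conversion step at all.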


Hey, I am using the same camera. In the terminal, the video is displayed correctly.

If I try to display the camera image with GStreamer in OpenCV, the isOpened() function returns False.

Do you have any advice for me?

Hi p.herder,

Please open a new topic if this is still an issue.

Thanks