USB cam runs successfully with gst-launch, but doesn't run with OpenCV

Hi. I'm trying to use a USB cam with a Jetson Nano 2GB.
micro SD source: jetbot-042_nano-2gb-jp441.zip
All tests are executed in the Jupyter notebook provided by the above image.

First, I tested my gst pipeline with gst-launch:

gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw, width=320, height=240 ! nvvidconv ! video/x-raw(memory:NVMM), width=224, height=224, format=BGRx ! videoconvert ! fakesink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, width=(int)320, height=(int)240, framerate=(fraction)30/1, format=(string)YUY2, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, width=(int)320, height=(int)240, framerate=(fraction)30/1, format=(string)YUY2, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)224, height=(int)224, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)4/3, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)224, height=(int)224, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)4/3, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)224, height=(int)224, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)4/3, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)224, height=(int)224, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)4/3, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)224, height=(int)224, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)4/3, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)224, height=(int)224, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)4/3, interlace-mode=(string)progressive, format=(string)BGRx
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:sink: caps = video/x-raw, width=(int)320, height=(int)240, framerate=(fraction)30/1, format=(string)YUY2, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, width=(int)320, height=(int)240, framerate=(fraction)30/1, format=(string)YUY2, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)2:4:5:1, interlace-mode=(string)progressive
^Chandling interrupt.
Interrupt: Stopping pipeline …
Execution ended after 0:00:03.925704186
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

Next, I ported it to the Jupyter notebook with cv2, but it failed:

import cv2
gst_str = 'v4l2src device=/dev/video0 ! video/x-raw, width=320, height=240 ! nvvidconv ! video/x-raw(memory:NVMM), width=224, height=224, format=BGRx ! videoconvert ! appsink'
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
re, image = cap.read()
print(re)

False

Finally, I tried writing the pipeline without nvvidconv and successfully got the expected image:

gst_str = 'v4l2src device=/dev/video0 ! video/x-raw, width=320, height=240 ! videoscale ! video/x-raw, width=224, height=224 ! videoconvert ! appsink'
(only this line differs)
True
display_cv_image(image)
--> The image is displayed as expected.
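For reference, the working CPU-only pipeline can be wrapped in a small helper. This is a sketch, not a verified implementation for this exact JetBot image; the explicit video/x-raw, format=BGR cap before appsink is my addition, since OpenCV expects BGR frames and pinning the format avoids relying on default negotiation:

```python
def build_cpu_pipeline(device="/dev/video0", cap_w=320, cap_h=240, out=224):
    """Build the GStreamer pipeline string that worked with cv2.CAP_GSTREAMER:
    CPU-side resize via videoscale, BGR output for OpenCV."""
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw, width={cap_w}, height={cap_h} ! "
        f"videoscale ! video/x-raw, width={out}, height={out} ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# Usage (requires an OpenCV build with GStreamer support):
#   cap = cv2.VideoCapture(build_cpu_pipeline(), cv2.CAP_GSTREAMER)
print(build_cpu_pipeline())
```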

Can anyone tell me why my pipeline doesn't work with cv2? I'm worried about a performance penalty without nvvidconv.

Hi,
You don't need nvvidconv here, because OpenCV's appsink only supports CPU video/x-raw buffers, while nvvidconv in that position outputs NVMM (GPU) memory. You should run a pipeline like the one in this topic:
Sony camera module cannot be opened with OpenCV on Xavier

Oops, so nvvidconv is not needed in this case?
By the way, why does my first (gst-launch) version run correctly?

Hi,
Since fakesink accepts any capability, the first gst-launch pipeline works even though its output capability is video/x-raw(memory:NVMM), width=224, height=224, format=BGRx. For hooking into OpenCV, the capability has to be video/x-raw, format=BGR.
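If hardware scaling is still wanted, the usual pattern on Jetson (a sketch, not verified on this exact JetBot image) is a second nvvidconv that copies the NVMM buffer back to CPU memory, followed by videoconvert down to the BGR caps appsink needs:

```python
def build_nvmm_pipeline(device="/dev/video0", cap_w=320, cap_h=240, out=224):
    """Hardware-scaled pipeline: the first nvvidconv resizes in NVMM (GPU)
    memory, the second nvvidconv copies the frame back to CPU memory so
    videoconvert can hand BGR buffers to appsink/OpenCV."""
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw, width={cap_w}, height={cap_h} ! "
        f"nvvidconv ! video/x-raw(memory:NVMM), width={out}, height={out}, format=BGRx ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# Usage: cap = cv2.VideoCapture(build_nvmm_pipeline(), cv2.CAP_GSTREAMER)
print(build_nvmm_pipeline())
```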


Thanks! Now I understand that fakesink differs from appsink in which capabilities it accepts.