I am trying to use the nvvidconv plugin to perform efficient colorspace conversion between I420 and RGBA on frames from a USB webcam that produces JPEG. The output will eventually be sent to an OpenCV ROS node. Right now I’m just trying to get the image into non-NVMM memory after using nvvidconv for the colorspace conversion.
Example pipeline of what I’d like to do (that does not work):
gst-launch-1.0 -vvv -e v4l2src device=/dev/video0 do-timestamp=true ! image/jpeg, width=1280, height=720, framerate=60/1 ! nvjpegdec ! nvvidconv ! video/x-raw, format=(string)RGBA ! videoconvert ! xvimagesink sync=false
The only way I’ve been able to get something to work is by placing a “video/x-raw” caps filter between nvjpegdec and nvvidconv and using a GPU-based video sink such as nvoverlaysink or “nvegltransform ! nveglglessink”.
The commands below render images from the camera; however, I don’t know how to get the frames from there into my application.
nvoverlaysink:
gst-launch-1.0 -vvv -e v4l2src device=/dev/video0 do-timestamp=true ! image/jpeg, width=1280, height=720, framerate=60/1 ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! nvoverlaysink sync=false
egl:
gst-launch-1.0 -vvv -e v4l2src device=/dev/video0 do-timestamp=true ! image/jpeg, width=1280, height=720, framerate=60/1 ! nvjpegdec ! video/x-raw ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)RGBA' ! nvegltransform ! nveglglessink
I cannot pipe straight from nvvidconv to an xvimagesink, as is suggested in multiple places in the Multimedia User Guide. I noticed that in the Guide there is always an OpenMAX plugin before the nvvidconv plugin in those examples, which makes me wonder whether that is more than coincidence.
In the end, that means the command below doesn’t work even with a video/x-raw caps filter between nvjpegdec and nvvidconv.
gst-launch-1.0 -vvv -e v4l2src device=/dev/video0 do-timestamp=true ! image/jpeg, width=1280, height=720, framerate=60/1 ! nvjpegdec ! video/x-raw ! nvvidconv ! video/x-raw, format=(string)I420 ! xvimagesink sync=false
I have also tried not specifying the output colorspace to xvimagesink, and inserting a videoconvert between nvvidconv and xvimagesink, just to see if anything would show up.
My end goal would be to have nvjpegdec and nvvidconv pass images over NVMM memory, with nvvidconv outputting video/x-raw, format=RGBA that can then be consumed by the ROS/OpenCV application.
The GStreamer videoconvert plugin can be used in place of nvvidconv, but it results in high CPU usage.
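In case it helps, this is roughly how I plan to consume the frames on the application side: a minimal Python sketch assuming OpenCV built with GStreamer support (cv2.CAP_GSTREAMER). The appsink tail and the build_pipeline helper are my own assumptions, not something I have working on the Jetson yet.

```python
# Sketch only: assemble the gst-launch-style pipeline description,
# terminated by appsink so converted RGBA frames land in application
# (CPU) memory where OpenCV/ROS can read them.
def build_pipeline(device="/dev/video0", width=1280, height=720, fps=60):
    return (
        f"v4l2src device={device} do-timestamp=true ! "
        f"image/jpeg, width={width}, height={height}, framerate={fps}/1 ! "
        "nvjpegdec ! video/x-raw ! "
        "nvvidconv ! video/x-raw, format=(string)RGBA ! "
        "appsink drop=true max-buffers=2"
    )

# Usage (requires OpenCV compiled with GStreamer support):
#   import cv2
#   cap = cv2.VideoCapture(build_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()

print(build_pipeline())
```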
All help is greatly appreciated.