Nvvidconv colorspace conversion difficulties

I am attempting to use a GStreamer pipeline to pick up frames from nvarguscamerasrc, convert them to RGBA, and grab them for processing at the application level using OpenCV.

I have two problems:

  1. nvvidconv appears to have some strange undocumented constraints on its behaviour.

This pipeline:

gst-launch-1.0 nvarguscamerasrc ! capsfilter caps="video/x-raw(memory:NVMM),width=(int)4032,height=(int)3040,format=(string)NV12,framerate=(fraction)30/1" ! nvvidconv ! capsfilter caps="video/x-raw(memory:NVMM),format=(string)RGBA" ! autovideosink

… crashes with:

ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: Internal data stream error
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0:
streaming stopped, reason not-linked (-1)
Execution ended after 0:00:02.980244275

But when we scale the output down to an apparently undocumented size limit that I found by trial and error:

gst-launch-1.0 nvarguscamerasrc ! capsfilter caps="video/x-raw(memory:NVMM),width=(int)4032,height=(int)3040,format=(string)NV12,framerate=(fraction)30/1" ! nvvidconv ! capsfilter caps="video/x-raw(memory:NVMM),format=(string)RGBA,width=3344,height=2508" ! autovideosink

… we have a working preview.

Why?

  2. Behaviour appears to differ between gst-launch-1.0 and the OpenCV VideoCapture object

The stream above, while functioning just fine from gst-launch, fails if we run it through an OpenCV VideoCapture object to an appsink:

const std::string pipeline =
    "nvarguscamerasrc ! capsfilter caps=\"video/x-raw(memory:NVMM),width=(int)4032,height=(int)3040,format=(string)NV12,framerate=(fraction)30/1\" ! nvvidconv ! "
    "capsfilter caps=\"video/x-raw(memory:NVMM),format=(string)RGBA,width=3344,height=2508\" ! appsink";

GSTcam::GSTcam(ViewPt screensize) : CameraBase(screensize), VideoCapture(pipeline, cv::CAP_GSTREAMER)
{
    // do stuff
}

… this doesn’t work at all, with the unhelpful message:

[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module nvarguscamerasrc0 reported: Internal data stream error.
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created

“Internal data stream error” isn’t much to go on.

Neither GStreamer nor OpenCV is actually necessary to my work… I simply need to grab Argus frames at full resolution, convert them to RGBA, and pass them to my code as a raw buffer at 30 fps.
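
To make that concrete, this is roughly the consumption pattern I have in mind — a minimal sketch using the appsink pull API, where the pipeline string is simplified and process_frame() is a placeholder for my own code:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// Placeholder for the application-level processing.
static void process_frame(const guint8* data, gsize size) { /* ... */ }

int main(int argc, char** argv)
{
    gst_init(&argc, &argv);

    GError* err = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "nvarguscamerasrc ! nvvidconv ! video/x-raw,format=RGBA ! "
        "appsink name=sink max-buffers=2 drop=true", &err);
    if (!pipeline) return 1;

    GstElement* sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    for (;;) {
        // Blocks until a frame arrives or the stream ends.
        GstSample* sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
        if (!sample) break;

        GstBuffer* buf = gst_sample_get_buffer(sample);
        GstMapInfo map;
        if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
            process_frame(map.data, map.size);  // raw RGBA bytes in CPU memory
            gst_buffer_unmap(buf, &map);
        }
        gst_sample_unref(sample);
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}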

But documentation is… sparse.

Can anyone point me in the direction of something helpful?

Hi,
You may try nveglglessink or nvoverlaysink:

$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=4032,height=3040,framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvegltransform ! nveglglessink sync=0

When using nvoverlaysink, it sends buffers to the display directly. If your TV does not support 4K, please downscale to a supported resolution (such as 1080p):

$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=4032,height=3040,framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA,width=1920,height=1080' ! nvoverlaysink sync=0

Interesting thought, but not the answer.

If we exclude nvvidconv (and don’t do colorspace transformation), the whole thing works regardless:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=4032,height=3040,framerate=30/1' !  nvoverlaysink 

…works fine… so the issue is with nvvidconv.

If we use “nvegltransform ! nveglglessink”, however, we can get an actual picture from this one:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=4032,height=3040,framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvegltransform ! nveglglessink sync=0

But that doesn’t help in the long run, because I need to pipe to appsink, and the manual says:

“nvegltransform: Video transform element for NVMM to EGLimage
(supported with nveglglessink only)”

And indeed, it crashes when I try to pipe to appsink.

We also have:

“nvvideosink: Video Sink Component. Accepts YUV-I420 format and
produces EGLStream (RGBA)”

… which sounds promising; camera frames in RGBA are exactly what I want. Except… it’s a sink. It doesn’t pipe to anything. Where does it go? How do I use it? The documentation is utterly silent on this.

Hi,
RGBA is not supported by appsink in OpenCV. Please send buffers in I420 (or NV12) or BGR. There are Python samples for reference:
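
For instance, a BGR variant of the pipeline that OpenCV's appsink can negotiate would look roughly like this (a sketch; the caps are adapted from your pipeline, and the RGBA conversion is done on the CPU afterwards):

#include <opencv2/opencv.hpp>

int main()
{
    // nvvidconv can emit BGRx into system memory; videoconvert then
    // drops the padding byte so appsink receives plain BGR.
    const std::string pipeline =
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM),width=4032,height=3040,format=NV12,framerate=30/1 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) return 1;

    cv::Mat frame, rgba;
    while (cap.read(frame)) {
        // frame is 8-bit BGR; convert to RGBA in CPU memory if needed.
        cv::cvtColor(frame, rgba, cv::COLOR_BGR2RGBA);
        // ... hand rgba.data to the application ...
    }
    return 0;
}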

Ah, thank you, yes… that makes sense.

However, my entire goal in constructing a gstreamer pipeline was to have camera frames, as RGBA data, available to my application. So perhaps I was attempting to use the wrong tool.

If GStreamer is not sufficiently flexible for this task, perhaps I would be better served using libargus directly to grab camera frames into GPU memory, convert them to RGBA there, and pass a handle back to my application?

The real end goal here is to have camera frames as SFML textures.
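
(For context, that last hop is simple once a tightly packed RGBA buffer exists on the CPU — something like the following, assuming SFML 2.x, with a dummy buffer standing in for a camera frame:)

#include <SFML/Graphics.hpp>
#include <cstdint>
#include <vector>

int main()
{
    const unsigned w = 4032, h = 3040;

    // Dummy buffer standing in for one camera frame (4 bytes per pixel, RGBA).
    std::vector<std::uint8_t> rgba(w * h * 4, 128);

    sf::Texture tex;
    tex.create(w, h);         // allocate the GPU texture once
    tex.update(rgba.data());  // upload one tightly packed RGBA frame

    sf::Sprite sprite(tex);
    // ... draw `sprite` in the render loop, calling tex.update() per frame ...
    return 0;
}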

Hi,
There are samples demonstrating GStreamer + cv::cuda::GpuMat and jetson_multimedia_api + cv::Mat. Please take a look:

See if you can apply it to your usecase.