Can nvvidconv load a frame from a CPU buffer into an NVMM buffer in a GStreamer pipeline?

Hello!

I am quite new to GStreamer. I have coded a frame grabber, say MySrc, and so far I have been writing to a CPU buffer. I am not using one of the provided source elements (like v4l2src) because my camera is not UVC. The following pipeline works:

gst-launch-1.0 MySrc ! videoconvert ! ximagesink

Now I was trying to do the same with nvvidconv and nvoverlaysink, but the following does not work:

gst-launch-1.0 MySrc ! 'video/x-raw, format=(string)RGBA' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink -e

All I get is a red flickering screen :/ I am wondering whether I missed something. I have seen that there is this NvBuffer API that accesses (memory:NVMM) buffers in L4T, and I saw there is this

#include "gst/allocators/gstdmabuf.h"

in GStreamer, but I am a bit inexperienced when it comes to buffers.

If anyone could let me know, I’d appreciate it!

You may try using verbose mode with videoconvert in order to see which format is used:

gst-launch-1.0 -v MySrc ! videoconvert ! ximagesink

Note the format used on the output (SRC pad) of MySrc. It should appear in a line such as:

/GstPipeline:pipeline0/MySrc:MySrc0.GstPad:src: caps ="video/x-raw\,\ format\=\(string\)..."

Is it RGBA or another format? If another one, is it supported as input by nvvidconv? You may check with:

gst-inspect-1.0 nvvidconv

and check for its input (SINK) capabilities.
You may also check in the same way which formats MySrc claims to provide:

gst-inspect-1.0 MySrc

You may also enable GStreamer debug output, but be aware that increasing the debug level may produce a huge amount of messages:

GST_DEBUG=*:3 gst-launch-1.0 MySrc ! 'video/x-raw, format=(string)RGBA' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink -e

Also be sure to have a monitor connected, and perhaps a session opened, when using nvoverlaysink.

Hi PasToutSeul!
Thanks for your support. When I run

gst-launch-1.0 -v spinnakersrc ! videoconvert ! ximagesink

I get the following from the verbose bit:

nvidia@xavier-emb:~$ gst-launch-1.0 -v spinnakersrc ! videoconvert ! ximagesink
(gst-launch-1.0:10213): GStreamer-CRITICAL **: 11:12:17.716: gst_clock_get_time: assertion 'GST_IS_CLOCK (clock)' failed
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstSpinnakerSrc:spinnakersrc0.GstPad:src: caps = video/x-raw, format=(string)RGBA, width=(int)2736, height=(int)1824, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)sRGB, framerate=(fraction)0/1
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps = video/x-raw, width=(int)2736, height=(int)1824, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)0/1, format=(string)BGRx
/GstPipeline:pipeline0/GstXImageSink:ximagesink0.GstPad:sink: caps = video/x-raw, width=(int)2736, height=(int)1824, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)0/1, format=(string)BGRx
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink: caps = video/x-raw, format=(string)RGBA, width=(int)2736, height=(int)1824, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)sRGB, framerate=(fraction)0/1

When I run

gst-launch-1.0 -v spinnakersrc ! nvvidconv ! nvoverlaysink -e

I get

nvidia@xavier-emb:~$ gst-launch-1.0 -v spinnakersrc ! nvvidconv ! nvoverlaysink -e
nvbuf_utils: Could not get EGL display connection
(gst-launch-1.0:10560): GStreamer-CRITICAL **: 11:18:08.409: gst_clock_get_time: assertion 'GST_IS_CLOCK (clock)' failed
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
/GstPipeline:pipeline0/GstSpinnakerSrc:spinnakersrc0.GstPad:src: caps = video/x-raw, width=(int)2736, height=(int)1824, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)0/1, format=(string)RGBA, colorimetry=(string)sRGB
New clock: GstSystemClock
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)2736, height=(int)1824, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)0/1, format=(string)RGBA
/GstPipeline:pipeline0/GstNvOverlaySink-nvoverlaysink:nvoverlaysink-nvoverlaysink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)2736, height=(int)1824, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)0/1, format=(string)RGBA
In gst_spinnaker_src_create:
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:sink: caps = video/x-raw, width=(int)2736, height=(int)1824, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)0/1, format=(string)RGBA, colorimetry=(string)sRGB

I somehow don’t think that the “nvbuf_utils: Could not get EGL display connection” message is the problem, because I am also not able to do compression on the GPU directly to a file. Any advice?

So, as an update, what seems to be working (weirdly enough) is the following; with it I am not getting the red frame of death.

...stuff... ! videoconvert ! 'video/x-raw, format=(string)I420' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! ...stuff...

For some reason when I try to convert directly with nvvidconv, it does not work. I am wondering why that is…
Cheers!

I think it’s rather an nvoverlaysink issue than an nvvidconv issue.

If you’re running GUI, nvoverlaysink may have issue with some formats/resolutions.
In such case, you may use EGL backend instead:

gst-launch-1.0 -v spinnakersrc ! nvvidconv ! nvegltransform ! nveglglessink

Alternatively, you may also not log into the GUI: switch to a console (Ctrl-Alt-F2, for example), log in, and try your original pipeline ending with nvoverlaysink there. This works for me on R31.1 when simulating your camera with videotestsrc.
No idea what conversion videoconvert performs here, but it will be very slow at your resolution anyway.

Hmmm, for me the issue does not seem to be nveglglessink, because when I take videoconvert out, I am not able to encode either (omxh263enc encodes into black frames). Eventually I will try to write to NVMM memory directly. Is there any code somewhere that shows how to write to NVMM memory directly?

I also notice now that your plugin’s output framerate is 0/1.
Is your device only taking a snapshot? If so, you may try inserting the imagefreeze plugin after your plugin in the pipeline.

I’d suggest you test the rest of the pipeline with videotestsrc, and try your plugin once that works.
I don’t know your use case, but for example the following should work when logged into the GUI:

gst-launch-1.0 -v videotestsrc is-live=true ! video/x-raw,format=RGBA,width=2736,height=1824,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvegltransform ! nveglglessink

gst-launch-1.0 -v spinnakersrc ! imagefreeze ! video/x-raw,format=RGBA,width=2736,height=1824,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvegltransform ! nveglglessink

Hi Patouceul!
The pipeline with videotestsrc works, but the one with spinnakersrc still gives me a black screen.
The spinnakersrc plugin is a “naive” camera frame grabber I wrote; it produces RGB or RGBA frames (uncompressed) at approximately 30 fps.
I have also done some naive timestamping; for instance, the output of this looks like the following:

gst-launch-1.0 -v spinnakersrc ! fakesink silent=false
/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = chain   ******* (fakesink0:sink) (19961856 bytes, dts: none, pts: 0:00:04.892243872, duration: none, offset: -1, offset_end: -1, flags: 00000000 , meta: none) 0x7f38008230

I wonder whether nvvidconv is not working because I do not set some fields that videoconvert sets automatically?

However, the following:

gst-launch-1.0 -v spinnakersrc ! videoconvert ! 'video/x-raw, format=(string)NV12' ! fakesink silent=false

gives this:

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = chain   ******* (fakesink0:sink) (7485696 bytes, dts: none, pts: 0:00:05.555283712, duration: none, offset: -1, offset_end: -1, flags: 00000000 , meta: GstVideoMeta) 0x7f680301e0

Apart from the size of the frame in memory, I do not see a difference. Any thoughts?
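For reference, both byte counts are at least consistent with the negotiated formats: RGBA is 4 bytes per pixel, while NV12/I420 is 1.5 bytes per pixel (a full-resolution Y plane plus 2×2-subsampled chroma). A quick sanity check of that arithmetic in Python, using the 2736×1824 resolution from my caps:

```python
# Frame-size sanity check: RGBA is 4 bytes/pixel, NV12/I420 is 1.5 bytes/pixel
# (full-resolution Y plane + chroma subsampled 2x2 in both directions).
WIDTH, HEIGHT = 2736, 1824  # resolution taken from the negotiated caps

pixels = WIDTH * HEIGHT
rgba_bytes = pixels * 4          # one byte each for R, G, B, A
nv12_bytes = pixels * 3 // 2     # Y plane + quarter-size U and V

print(rgba_bytes)  # 19961856, matching the RGBA fakesink log
print(nv12_bytes)  # 7485696, matching the NV12 fakesink log
```

So the buffers at least have the expected sizes.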

My use case is really simple: I just want to feed the frames from the camera into the pipeline. The reason I had to write a new grabber is that my camera is not UVC.

Finally, with imagefreeze, I get this output:

gst-launch-1.0 -v spinnakersrc ! imagefreeze ! fakesink silent=false
/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = chain   ******* (fakesink0:sink) (19961856 bytes, dts: none, pts: 0:22:02.920000000, duration: 0:00:00.040000000, offset: 33073, offset_end: 33074, flags: 00000040 discont , meta: none) 0x7f380099c0

No idea what videoconvert does, but beyond the block size, I also see in your log some metadata added by videoconvert. I am not sure how relevant that is to this issue.
I’d suspect something wrong in your ‘naive’ plugin rather than in nvvidconv (although I cannot tell for sure), so you may reach out to the GStreamer dev forum. Provide full system info, the commands, and the full output.
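In case it helps: GstVideoMeta mainly carries the per-plane layout (offsets and strides) of the buffer. For tightly packed I420 at your resolution, that layout would presumably look like the sketch below (just the arithmetic, not an actual GStreamer API call; real NVMM buffers may additionally use aligned pitches):

```python
# Sketch of a tightly packed I420 plane layout: the kind of per-plane
# offset/stride information GstVideoMeta describes. Hypothetical helper,
# not a GStreamer function.
def i420_layout(width, height):
    y_size = width * height                 # full-resolution luma plane
    c_size = (width // 2) * (height // 2)   # each chroma plane, 2x2 subsampled
    offsets = (0, y_size, y_size + c_size)  # start of the Y, U, V planes
    strides = (width, width // 2, width // 2)
    total = y_size + 2 * c_size
    return offsets, strides, total

offsets, strides, total = i420_layout(2736, 1824)
print(offsets)  # (0, 4990464, 6238080)
print(strides)  # (2736, 1368, 1368)
print(total)    # 7485696, the I420 buffer size seen earlier
```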

I haven’t looked into this in detail nor even tried it (it looks a bit old), but it seems to provide access to many cameras with a case similar to yours, so you may have a look at it.

So what seems to work now is that I do a conversion to I420 inside my plugin, but still output to a CPU buffer, using the NVIDIA 2D Image And Signal Performance Primitives (NPP) function RGBToYUV420. Then nvvidconv accepts my input smoothly, without the need for an extra videoconvert in between.
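For anyone landing here later: as far as I understand, RGBToYUV420 performs the classic full-range BT.601 RGB-to-YUV conversion per pixel, plus 2×2 subsampling of the chroma planes. A rough pure-Python illustration of just the luma part of that math (NPP of course does all of this on the GPU):

```python
# Illustration of the luma equation behind an RGB -> YUV420 conversion,
# assuming the classic full-range BT.601 coefficients. NPP's RGBToYUV420
# additionally computes the chroma planes and subsamples them 2x2.
def bt601_luma(r, g, b):
    # Y = 0.299 R + 0.587 G + 0.114 B, clamped and rounded to 8 bits
    return min(255, max(0, round(0.299 * r + 0.587 * g + 0.114 * b)))

print(bt601_luma(255, 255, 255))  # 255 (white)
print(bt601_luma(0, 0, 0))        # 0 (black)
print(bt601_luma(255, 0, 0))      # 76 (pure red)
```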