Get RGB frame data from nvvidconv (GStreamer 1.0)

My GStreamer pipeline is:

gst-launch-1.0 v4l2src device="/dev/video1" ! "video/x-raw, format=(string)I420, width=(int)1920, height=(int)1080" ! nvvidconv ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)RGBA" ! MyFilter ! nvoverlaysink -v

“MyFilter” is a custom transform plugin that I built myself, but I don’t know how to get the video frame from “nvvidconv”. Does anyone know how to get the video frame out of nvvidconv?

When I use “videoconvert”, I can get YUV data and the correct size for one YUV frame. But with “nvvidconv”, the size of the data is very small (e.g. size = 744). I want to get the data in my transform function like this:

static GstFlowReturn
gst_myfilter_transform (GstBaseTransform * base, GstBuffer * inbuf, GstBuffer * outbuf)
{
  GstMapInfo in_info;
  GstMapInfo out_info;

  /* Check the return values: mapping can fail for non-system memory. */
  if (!gst_buffer_map (inbuf, &in_info, GST_MAP_READ))
    return GST_FLOW_ERROR;
  if (!gst_buffer_map (outbuf, &out_info, GST_MAP_WRITE)) {
    gst_buffer_unmap (inbuf, &in_info);
    return GST_FLOW_ERROR;
  }

  /* in_info.size is a gsize, so print it with G_GSIZE_FORMAT, not %d. */
  g_print ("in_info.size = %" G_GSIZE_FORMAT "\n", in_info.size);

  gst_buffer_unmap (inbuf, &in_info);
  gst_buffer_unmap (outbuf, &out_info);

  return GST_FLOW_OK;
}

Hi moondev,
The output of nvvidconv is not a CPU-accessible buffer; that is why the mapped size you see is so small.

So in your case, you need to do the YUV-to-RGBA conversion, do post-processing on the RGBA buffer, and then render it out?
Please share more detail so that we can give a suggestion.

Thanks, DaneLLL.
As you described, I want to do post-processing on the RGBA buffer and then render it out.
I did it by following the nvsample_cudaprocess example. This is the new GStreamer pipeline:

gst-launch-1.0 v4l2src device="/dev/video1" ! "video/x-raw, format=(string)I420, width=(int)1920, height=(int)1080" ! nvvidconv ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420"  ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! nvoverlaysink -e -v

I added my CUDA code in the gpu_process() function and it worked.

But I have another problem when I want to display the original image and the processed image side by side. nvoverlaysink seems to be the only choice for rendering the RGBA buffer with the above pipeline, so the processed image takes up the full screen and there is no place left to display the original image.

Did I miss anything here?

If you have any suggestion, please help me.

Hi moondev,
Please try nveglglessink:
export DISPLAY=:0
gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, format=(string)I420, width=(int)640, height=(int)480" ! nvvidconv ! "video/x-raw(memory:NVMM), format=(string)I420" ! tee name=nt nt. ! queue ! nvegltransform ! nveglglessink window-x=100 window-y=900 nt. ! queue ! nvtee ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! nvegltransform ! nveglglessink window-x=100 window-y=100 -e

Thank you very much.
I used your pipeline and it worked for me. So great.