Which kind of surface layout does nvvidconv use?

Hi all,

I am developing a GStreamer plugin that needs to support NVMM. I am able to do a pass-through of NVMM buffers in the plugin. However, when I try to apply OpenGL operations on the buffer, they are not applied to the image. I have seen that pitched memory is not supported, so I am wondering whether the NVMM buffers that nvvidconv sends are pitch-linear or block-linear. The pipeline is the following:

gst-launch-1.0 videotestsrc is-live=true ! 'video/x-raw, width=320, height=240, format=RGBA' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=320, height=240' ! myeglelement  ! nvvidconv ! 'video/x-raw, width=320, height=240,format=RGBA'  !  videoconvert ! xvimagesink sync=false display=localhost:10.0

I made a proof of concept by allocating DMA buffers within the element in the following fashion:

int dmabuf_fd;
EGLImageKHR egl_image;
PFNGLEGLIMAGETARGETTEXTURE2DOESPROC EGLImageTargetTexture2DOES;

/* Allocate a block-linear RGBA NvBuffer */
NvBufferCreate (&dmabuf_fd, width, height, NvBufferLayout_BlockLinear, NvBufferColorFormat_ABGR32);

/* inFrame is the data packet from a userspace buffer; copy it into the NvBuffer */
Raw2NvBuffer (inFrame, 0, width, height, dmabuf_fd);

/* Wrap the DMA buffer in an EGLImage */
egl_image = NvEGLImageFromFd (NULL, dmabuf_fd);

/* The GL context lives in f */
f->glBindTexture (GL_TEXTURE_2D, texture ());

/* Bind the EGLImage to the bound texture */
EGLImageTargetTexture2DOES = (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC) eglGetProcAddress ("glEGLImageTargetTexture2DOES");
EGLImageTargetTexture2DOES (GL_TEXTURE_2D, (GLeglImageOES) egl_image);

/* Perform the operations */

/* Wait for the GPU, then copy the result back to userspace */
glFinish ();
NvBuffer2Raw (dmabuf_fd, 0, this->width, this->height, outFrame);

/* Finish the context */
NvDestroyEGLImage (NULL, egl_image);
NvBufferDestroy (dmabuf_fd);

This approach worked: I was able to choose the surface layout and copy back and forth between a userspace frame and DMA buffers.

Now, instead of allocating the memory myself, I retrieve the DMA buffer with status = ExtractFdFromNvBuffer ((void *) frame, (int *) &this->dmabuf_fd);. The status is 0 and the dmabuf_fd is non-zero, so it seems I am able to retrieve the buffer. Also, I am doing an in-place transformation in my GStreamer element.

So, the main question here is: does nvvidconv give me block linear memory?


I have inspected the buffer:

NvBufferParams params;
if (NvBufferGetParams (dmabuf_fd, &params) == 0) {
  std::cout << "fd: " << params.dmabuf_fd << std::endl;
  std::cout << "nv_buffer: " << params.nv_buffer << std::endl;
  std::cout << "nv_buffer_size: " << params.nv_buffer_size << std::endl;
  std::cout << "pixel_format: " << params.pixel_format << std::endl;
  std::cout << "num_planes: " << params.num_planes << std::endl;
  std::cout << "width[0]: " << params.width[0] << std::endl;
  std::cout << "width[1]: " << params.width[1] << std::endl;
  std::cout << "width[2]: " << params.width[2] << std::endl;
  std::cout << "height[0]: " << params.height[0] << std::endl;
  std::cout << "height[1]: " << params.height[1] << std::endl;
  std::cout << "height[2]: " << params.height[2] << std::endl;
  std::cout << "pitch[0]: " << params.pitch[0] << std::endl;
  std::cout << "pitch[1]: " << params.pitch[1] << std::endl;
  std::cout << "pitch[2]: " << params.pitch[2] << std::endl;
  std::cout << "offset[0]: " << params.offset[0] << std::endl;
  std::cout << "offset[1]: " << params.offset[1] << std::endl;
  std::cout << "offset[2]: " << params.offset[2] << std::endl;
  std::cout << "layout[0]: " << params.layout[0] << std::endl;
}

There is only one plane, and layout[0] is zero, which according to the NvBufferLayout enum means a pitch layout (NvBufferLayout_Pitch). So the next question is: is it possible to change the layout from pitch to block-linear with an accelerated element?

If you have any other comments or ideas that could help, I would really appreciate them.

Regards,
Leon.

Hi,
You will receive RGBA pitch-linear buffers from nvvidconv. We suggest you create an RGBA block-linear NvBuffer and call NvBufferTransform() to convert the buffer from pitch-linear to block-linear.
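A minimal sketch of that suggestion, assuming the nvbuf_utils.h API from the Jetson Multimedia API. This runs only on a Jetson, so treat it as illustrative; the helper name and error handling are mine:

```cpp
/* Hypothetical helper: convert the pitch-linear fd coming out of
 * nvvidconv into a new block-linear NvBuffer via NvBufferTransform().
 * Requires the Jetson Multimedia API; not compiled or run here. */
#include "nvbuf_utils.h"
#include <cstring>

int to_block_linear (int src_fd, int width, int height, int *dst_fd)
{
  /* Destination buffer with the layout we actually want */
  if (NvBufferCreate (dst_fd, width, height,
                      NvBufferLayout_BlockLinear,
                      NvBufferColorFormat_ABGR32) != 0)
    return -1;

  NvBufferTransformParams params;
  memset (&params, 0, sizeof (params));
  params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
  params.transform_filter = NvBufferTransform_Filter_Smart;

  /* Hardware-accelerated copy from pitch-linear to block-linear;
   * returns 0 on success */
  return NvBufferTransform (src_fd, *dst_fd, &params);
}
```

Remember to NvBufferDestroy() the destination fd when you are done with it.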


Hi @DaneLLL

I will try it. I will come back as soon as I implement the conversion.

Thanks