Is the buffer corresponding to the bufferDesc parameter a CPU buffer or a DMA buffer?

Hi, in my GStreamer pipeline, if I use the gst_buffer_map interface in the appsink_sink_pad_buffer_probe function to map the GstBuffer, I get a pointer to the NvBufSurface and can read the bufferDesc parameter from it. My simplified code is as follows:

static GstPadProbeReturn appsink_sink_pad_buffer_probe(
    GstPad *pad, GstPadProbeInfo *info, gpointer u_data) {
  GstBuffer *buf = (GstBuffer *)info->data;
  GstMapInfo in_map_info;
  NvBufSurface *in_surf = NULL;

  memset(&in_map_info, 0, sizeof(in_map_info));

  /* Map the buffer contents and get the pointer to NvBufSurface. */
  if (!gst_buffer_map(buf, &in_map_info, GST_MAP_READ)) {
    GST_ERROR("gst_buffer_map failed\n");
    return GST_PAD_PROBE_OK;
  }
  in_surf = (NvBufSurface *)in_map_info.data;

  printf("NvSurface:batchSize=%d,numFilled=%d,isContiguous=%d,memtype=%d\n",
         in_surf->batchSize,
         in_surf->numFilled,
         in_surf->isContiguous,
         in_surf->memType);
  printf("NvBufSurfaceParams:w=%d,h=%d,pitch=%d,colorformat=%d,layout=%d,bufferDesc=%ld,dataSize=%d,dataPtr=%p\n",
         in_surf->surfaceList[0].width, in_surf->surfaceList[0].height,
         in_surf->surfaceList[0].pitch, in_surf->surfaceList[0].colorFormat,
         in_surf->surfaceList[0].layout,
         in_surf->surfaceList[0].bufferDesc, in_surf->surfaceList[0].dataSize,
         in_surf->surfaceList[0].dataPtr);
  for (int i = 0; i < in_surf->surfaceList[0].planeParams.num_planes; i++) {
    printf("planeParams:plane[%d],w=%d,h=%d,pitch=%d,offset=%d,psize=%d,bytesPerPix=%d\n", i,
           in_surf->surfaceList[0].planeParams.width[i],
           in_surf->surfaceList[0].planeParams.height[i],
           in_surf->surfaceList[0].planeParams.pitch[i],
           in_surf->surfaceList[0].planeParams.offset[i],
           in_surf->surfaceList[0].planeParams.psize[i],
           in_surf->surfaceList[0].planeParams.bytesPerPix[i]);
  }

  gst_buffer_unmap(buf, &in_map_info);

  return GST_PAD_PROBE_OK;
}

My pipeline: filesrc location=sample_1080p.jpg ! jpegparse ! nvv4l2decoder ! nvstreammux ! nvinfer ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! appsink

After constructing the pipeline, I added a probe callback on the sink pad of appsink. The simplified code is as follows:
app_sink_pad = gst_element_get_static_pad(appsink, "sink");
if (!app_sink_pad)
  g_print("Unable to get sink pad\n");
else
  gst_pad_add_probe(app_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                    appsink_sink_pad_buffer_probe, NULL, NULL);
gst_object_unref(app_sink_pad);

When I run the pipeline, the parameters printed by the callback function are as follows:
NvSurface:batchSize=1,numFilled=1,isContiguous=0,memtype=0
NvBufSurfaceParams:w=1920,h=1080,pitch=2048,colorformat=31,layout=0,bufferDesc=1324,dataSize=3538944,dataPtr=0x7f4c00b3b0
planeParams:plane[0],w=1920,h=1080,pitch=2048,offset=0,psize=2228224,bytesPerPix=1
planeParams:plane[1],w=960,h=540,pitch=1024,offset=2228224,psize=655360,bytesPerPix=1
planeParams:plane[2],w=960,h=540,pitch=1024,offset=2883584,psize=655360,bytesPerPix=1

I want to know whether the buffer corresponding to the bufferDesc parameter is a DMA buffer or a CPU buffer. According to the information you have given, a DMA buffer's width and height must be 32-byte aligned, so the output width and height should be 1920x1088. But the print above shows the actual (unaligned) width and height, so I have doubts about the properties of this buffer. I hope you can give me a clear answer.

It will be different for different devices.

The memory size is decided by the pitch and the aligned height, not by the actual width and height, and the alignment differs across color formats and devices. From the data you printed out, all we can tell you is that it is a hardware buffer.
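As a quick sanity check against the numbers printed above, psize divided by pitch recovers the padded row count of each plane. A minimal sketch using the exact values from the printout (the alignment rule itself is device- and format-specific, as noted):

/* Sanity check: psize == pitch * aligned_height, not width * height.
 * All values are copied from the printout in this thread. */
#include <stdio.h>

int main(void) {
    printf("Y plane rows: %d\n", 2228224 / 2048);   /* 1088: 1080 padded to 1088 */
    printf("U/V plane rows: %d\n", 655360 / 1024);  /* 640: 540 padded to 640 here */
    /* The three plane sizes also sum to the printed dataSize. */
    printf("total: %d\n", 2228224 + 655360 + 655360); /* 3538944 == dataSize */
    return 0;
}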

Hi, my hardware platform is Jetson Xavier NX. I have three questions:
(1) Are "hardware buffer" and "DMA buffer" the same concept in your description?
(2) If the bufferDesc parameter is a DMA fd, can I access it with low-level functions? Please have a look at Some questions about NvJPEGDecoder::decodeToFd - #6 by DaneLLL. I used the same interface to access the DMA fd in those two cases and got different width and height parameters: the case 1 parameters are not aligned, but the case 2 parameters are aligned. Why are the obtained parameters different?
(3) What is the meaning of the layout parameter? What is the difference between NVBUF_LAYOUT_PITCH and NVBUF_LAYOUT_BLOCK_LINEAR?

Hi, I hope you can answer these three questions one by one. Thank you.

  1. No. For Jetson devices, the memType value shows the hardware buffer type. NVBUF_MEM_DEFAULT, NVBUF_MEM_SURFACE_ARRAY and NVBUF_MEM_HANDLE are different types of hardware buffers. I think the NVBUF_MEM_HANDLE type is what you call a DMA buffer.
  2. NvBuffer and NvBufSurface are totally different.
  3. You can refer to What's the difference between LAYOUT_PITCH and LAYOUT_BLOCKLINEAR? (a small allocation sketch follows below).
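To make the layout distinction concrete, here is a minimal sketch using the NvBufSurface API from nvbufsurface.h; the field names are taken from that header, but treat this as an illustration of the layout field rather than verified production code:

/* Sketch: allocate a 1920x1080 NV12 hardware surface in a chosen layout.
 * NVBUF_LAYOUT_PITCH stores rows consecutively with padding (CPU friendly);
 * NVBUF_LAYOUT_BLOCK_LINEAR stores pixels in hardware-defined blocks
 * (preferred by the Jetson multimedia engines). */
#include <string.h>
#include "nvbufsurface.h"

static NvBufSurface *alloc_surface(NvBufSurfaceLayout layout) {
    NvBufSurfaceCreateParams params;
    NvBufSurface *surf = NULL;

    memset(&params, 0, sizeof(params));
    params.gpuId = 0;
    params.width = 1920;
    params.height = 1080;
    params.colorFormat = NVBUF_COLOR_FORMAT_NV12;
    params.layout = layout;                    /* NVBUF_LAYOUT_PITCH or
                                                  NVBUF_LAYOUT_BLOCK_LINEAR */
    params.memType = NVBUF_MEM_SURFACE_ARRAY;  /* Jetson hardware buffer */

    if (NvBufSurfaceCreate(&surf, 1, &params) != 0)
        return NULL;
    return surf;
}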

Hi,
(1) For the first question, I'd like to know whether there is any documentation on the difference between NVBUF_MEM_SURFACE_ARRAY and NVBUF_MEM_HANDLE. What are the application scenarios for the NVBUF_MEM_HANDLE memory type? I can't find any sample using this memory type.

(2) For the second question, my focus is not on NvBuffer or NvBufSurface. If the hardware buffer corresponding to the bufferDesc parameter has type NVBUF_MEM_SURFACE_ARRAY, why can I successfully do JPEG encoding by passing bufferDesc as the input to the NvJPEGEncoder::encodeFromFd interface? The annotation for the fd parameter of encodeFromFd reads:
fd Indicates the file descriptor (FD) of the hardware buffer.
What memory type of hardware buffer does the fd parameter actually require? I hope you can discuss this issue with your colleagues in charge of MMAPI.

(1) No. There is no document or sample introducing the memType values. They are just for internal use.
(2) NvBuffer is part of the Jetson accelerated GStreamer stack in the Tegra Linux Driver, so if you are using accelerated GStreamer you must use NvBuffer to access the hardware buffer. Jetson Linux API Reference: Buffer Manager | NVIDIA Docs

The other system is DeepStream (DeepStream GStreamer Plugin Overview — DeepStream 6.3 Release documentation), which uses the NvBufSurface interface: https://docs.nvidia.com/metropolis/deepstream/sdk-api/Buf.html. There is an NvBufSurface access sample: Implementing a Custom GStreamer Plugin with OpenCV Integration Example — DeepStream 6.3 Release documentation

These two systems are not compatible; you can only use one of them in your application. encodeFromFd is not a DeepStream interface, so you cannot use it in a DeepStream pipeline.
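For completeness, CPU access to a DeepStream hardware buffer goes through the NvBufSurface mapping calls (as in the OpenCV sample linked above), not through any NvBuffer/MMAPI function. A minimal sketch, assuming in_surf was obtained from gst_buffer_map as in the probe earlier in this thread:

/* Sketch: map plane 0 of the first surface for CPU reads, using the
 * NvBufSurface API (nvbufsurface.h). On Jetson, NvBufSurfaceSyncForCpu
 * is required for NVBUF_MEM_SURFACE_ARRAY buffers before the CPU reads. */
#include "nvbufsurface.h"

static int read_plane0(NvBufSurface *in_surf) {
    if (NvBufSurfaceMap(in_surf, 0, 0, NVBUF_MAP_READ) != 0)
        return -1;
    NvBufSurfaceSyncForCpu(in_surf, 0, 0);

    unsigned char *y_plane =
        (unsigned char *)in_surf->surfaceList[0].mappedAddr.addr[0];
    unsigned int pitch = in_surf->surfaceList[0].planeParams.pitch[0];
    /* ... read rows of `width` bytes, stepping by `pitch` per row ... */
    (void)y_plane; (void)pitch;

    NvBufSurfaceUnMap(in_surf, 0, 0);
    return 0;
}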

Hi, we mainly use the DeepStream system, but we have JPEG encode and decode needs in our project. We tried the nvjpegenc and nvjpegdec plugins in the GStreamer pipeline, but they are not flexible enough for our use: occasionally we need to handle a single JPEG image rather than a continuous stream of images. So we are trying to use the low-level encode and decode APIs inside a user-defined plugin. In this process we hope to encode and decode directly from the hardware buffer (mapped from the GstBuffer in the pipeline) to speed up processing, instead of going through software memory. Based on these requirements, do you have a recommended solution?

DeepStream can encode JPEG images for detected objects through the object encoding API. NVIDIA DeepStream SDK API Reference: Main Page
Please check the deepstream-image-meta-test sample; a rough outline of the API follows below.
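In rough outline, the object encoding API from nvds_obj_encode.h looks like the sketch below. The exact signatures and NvDsObjEncUsrArgs fields vary between DeepStream releases (newer releases pass a GPU ID when creating the context), so check deepstream-image-meta-test in your SDK for the authoritative usage:

/* Rough outline of the object encoding API (nvds_obj_encode.h), modeled on
 * deepstream-image-meta-test. Signatures and argument fields differ between
 * DeepStream releases; consult the sample shipped with your SDK. */
#include "nvds_obj_encode.h"

void encode_detected_object(NvBufSurface *surf, NvDsObjectMeta *obj_meta,
                            NvDsFrameMeta *frame_meta) {
    NvDsObjEncCtxHandle ctx = nvds_obj_enc_create_context();

    NvDsObjEncUsrArgs args = {0};
    args.saveImg = 1;        /* write the encoded JPEG to a file */
    args.attachUsrMeta = 1;  /* also attach the JPEG as user meta */

    nvds_obj_enc_process(ctx, &args, surf, obj_meta, frame_meta);
    nvds_obj_enc_finish(ctx);  /* block until pending encodes complete */
    nvds_obj_enc_destroy_context(ctx);
}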

For JPEG decoding, nvjpegdec is a DeepStream plugin (Gst-nvjpegdec — DeepStream 6.1.1 Release documentation), and nvv4l2decoder can also decode JPEG images (Gst-nvvideo4linux2 — DeepStream 6.1.1 Release documentation).
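For a quick single-image test of either decode path, gst-launch sketches along the lines of the pipeline earlier in this thread (the mjpeg property on nvv4l2decoder is an assumption here; confirm it with gst-inspect-1.0 on your release):

gst-launch-1.0 filesrc location=sample_1080p.jpg ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=I420' ! fakesink
gst-launch-1.0 filesrc location=sample_1080p.jpg ! nvjpegdec ! fakesink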