How to acquire a gray cv::Mat from an NV12 NvBufSurface

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): jetson
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 5.0.2
• TensorRT Version: 8.4

Below is the NvBufSurface info:

```
batchSize = 1
numFilled = 1
memType = 4
surfaceList[0]:
  width = 800
  height = 512
  pitch = 832
  layout = 1
  colorFormat = 6
  bufferDesc = 152
  dataSize = 786432
  dataPtr = 0xaaaacb25ec50
```

colorFormat = 6: it's NVBUF_COLOR_FORMAT_NV12!

```
NvBufSurfaceParams->planeParams:
  num_planes = 2
  width[0] = 800
  height[0] = 512
  pitch[0] = 832
  offset[0] = 0
  psize[0] = 524288
  bytesPerPix[0] = 1
  width[1] = 400
  height[1] = 256
  pitch[1] = 832
  offset[1] = 524288
  psize[1] = 262144
  bytesPerPix[1] = 2
NvBufSurfaceParams->mappedAddr:
  mappedAddr->addr[0] = (nil)
  mappedAddr->addr[1] = (nil)
  mappedAddr->addr[2] = (nil)
  mappedAddr->addr[3] = (nil)
  mappedAddr->eglImage is NULL
  mappedAddr->addr[0] = 0xfffee25b7000
  mappedAddr->addr[1] = 0xfffee2237000
  mappedAddr->addr[2] = (nil)
  mappedAddr->addr[3] = (nil)
```

I added the following piece of code in preprocess.cpp, but it does not give a correct image:
```cpp
// Map the buffer for CPU access
NvBufSurfaceMap(in_surf, 0, 0, NVBUF_MAP_READ);
NvBufSurfaceSyncForCpu(in_surf, 0, 0);

// Convert NV12 to grayscale (the Y plane is the gray image)
cv::Mat gray(in_surf->surfaceList[0].height, in_surf->surfaceList[0].width, CV_8UC1);
uchar *data = (guint8 *) in_surf->surfaceList[0].mappedAddr.addr[0];
guint stride = in_surf->surfaceList[0].pitch;

for (uint32_t j = 0; j < in_surf->surfaceList[0].height; j++) {
    for (uint32_t i = 0; i < in_surf->surfaceList[0].width; i++) {
        gray.at<uchar>(j, i) = data[j * stride + i];
    }
}

// Unmap the buffer
NvBufSurfaceUnMap(in_surf, 0, 0);

// Save the grayscale image
cv::imwrite("output_gray.png", gray);
```

Please give me more advice, thank you.

psize[0] = 524288 = 1024 x 512, where 512 is the height of the Y plane, but 1024 relates neither to width = 800 nor to pitch = 832. I don't know how to read the 800x512 Y-plane data, or how to handle the boundary alignment.

This is the hardware alignment requirement for memory; you don't need to care about it. Just read the data according to the pitch and height.

I have tested the following code on DS-7.0.

```cpp
GstMapInfo in_map_info;
if (!gst_buffer_map(buf, &in_map_info, GST_MAP_READ)) {
  g_print("Error: Failed to map gst buffer\n");
  return GST_PAD_PROBE_OK;
}
NvBufSurface *surface = (NvBufSurface *)in_map_info.data;
// TODO: for CUDA device memory we need to use cudaMemcpy
NvBufSurfaceMap(surface, -1, -1, NVBUF_MAP_READ);
#ifdef PLATFORM_TEGRA
/* Cache the mapped data for CPU access */
if (surface->memType == NVBUF_MEM_SURFACE_ARRAY) {
  NvBufSurfaceSyncForCpu(surface, 0, 0);
}
#endif
// l_frame, batch_meta and frame_number come from the surrounding probe context
for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
     l_frame = l_frame->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
  guint height = surface->surfaceList[frame_meta->batch_id].height;
  guint width = surface->surfaceList[frame_meta->batch_id].width;

  // Make a Mat over the mapped surface buffer, honoring the row pitch
  cv::Mat rawmat(height, width, CV_8UC1,
                 surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
                 surface->surfaceList[frame_meta->batch_id].planeParams.pitch[0]);
  if (frame_number % 300 == 0) {
    // e.g. w 1920 h 1080 pitch 2048 psize 2228224 == 2048 x 1088
    printf("w %d h %d pitch %d psize %d layout %d\n", width, height,
           surface->surfaceList[frame_meta->batch_id].planeParams.pitch[0],
           surface->surfaceList[frame_meta->batch_id].planeParams.psize[0],
           surface->surfaceList[frame_meta->batch_id].layout);
    char file_name[256] = {0};
    snprintf(file_name, sizeof(file_name), "frame-%d.png", frame_number);
    cv::imwrite(file_name, rawmat);
  }
}
#ifdef PLATFORM_TEGRA
if (surface->memType == NVBUF_MEM_SURFACE_ARRAY) {
  NvBufSurfaceSyncForDevice(surface, 0, 0);
}
#endif
NvBufSurfaceUnMap(surface, -1, -1);
gst_buffer_unmap(buf, &in_map_info);
```

By the way, can you share why you need to convert to the gray format? Maybe DeepStream has a better way to do it.

Thank you very much. I'll try the above code.
The reason I need gray is that I am doing adaptive ROI preprocessing for each frame. Anyway, the above code is greatly helpful.

hi,
I ran the above code, and it printed:
w 800 h 512 pitch 1024 psize 524288 layout 0
I attached the original image (which was encoded into H.264) and the image acquired by the above code added in the preprocess plugin.
It seems that there is a requirement on the size of the input H.264?



Please further help.

My encoding parameters:

```python
width = 800
height = 640
command = ['ffmpeg',
           '-y',
           # '-fflags', 'genpts',
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',  # raw input image format; may need modification
           '-pix_fmt', 'bgr24',
           # '-pix_fmt', 'yuv420p',
           '-s', '{}x{}'.format(width, height),
           # '-loop', '1',
           # '-pattern_type glob', '-i', './locpic/*.jpg'
           '-r', str(fps_pub),  # publish at this frame rate
           '-i', '-',
           '-c:v', 'libx264',
           '-pix_fmt', 'yuv420p',
           # '-rtcp_interval', '1',
           # '-bufsize', '500000',
           # '-rtbufsize', '500000',
           '-tune', 'zerolatency',  # zero-latency streaming
           # '-creation_time', 'now',
           '-rtsp_transport', 'udp',  # transport over UDP
           '-vprofile', 'baseline',  # no B-frames, only I and P frames, needed for low latency
           '-preset', 'ultrafast',  # fastest encoding
           '-an',  # no audio
           '-f', 'rtsp',
           rtsp_url]
```

The actual image is 800x512. If I use width = 800, height = 512 to publish the H.264, then in the DeepStream preprocess the pitch = 832 and the acquired image is completely unrecognizable. So I use width = 800, height = 640 to publish, and the pitch becomes 1024; I acquired the image as attached. It is smaller and shifted, and it seems to contain the lower part of the previous frame and most of the current frame.
How can I fix that, please?

  1. nvdspreprocess supports setting the color format of the model: set the value of network-color-format to 2 in the configuration file.

  2. If you want to change the ROI for each frame, there are two choices:
    a. Use gst_nvevent_new_roi_update to create an ROI update event, then send the event with gst_element_send_event. It's open source:
    /opt/nvidia/deepstream/deepstream/sources/libs/gstnvcustomhelper

    b. Use nvmultiurisrcbin with the REST API, like:

```shell
curl -XPOST 'http://localhost:9000/api/v1/roi/update' -d '{
   "stream_id":"xxxx",
   "roi_count":1,
   "roi":[
      {
         "roi_id":"0",
         "left":0,
         "top":0,
         "width":1920,
         "height":1080
      }
   ]
}'
```

It's open source too. Refer to /opt/nvidia/deepstream/deepstream/sources/libs/nvds_rest_server/nvds_roi_parse.cpp to set the parameters of the ROI update.
In fact, nvmultiurisrcbin also uses gst_nvevent_new_roi_update to update the ROI.
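For option 1, the key belongs in the `[property]` group of the nvdspreprocess configuration file. A minimal fragment (the value mapping in the comment follows the Gst-nvdspreprocess documentation; verify it against your DeepStream version):

```ini
[property]
# network-color-format: 0 = RGB, 1 = BGR, 2 = GRAY
network-color-format=2
```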

There is no such limitation. What is the problem you are facing now?

hi,
As for the size of the input H.264: when publishing H.264 at 800x512, the NvBufSurface pitch = 832 and the image acquired with the above method is chaotic; when I publish at 800x576 (the encoded picture stays the same, 800x512), the pitch = 1024, which seems a good value, and the acquired image (attached) is apparently more regular. So it seems the NvBufSurface pitch varies with the input size.

Another question: when using `#define dump_rois`, I got:

```
NvBufSurfTransform failed with error -3
ERROR from preprocess0: Custom Transformation from library failed

Debug info: gstnvdspreprocess.cpp(1060): group_transformation (): /GstPipeline:pipeline/GstBin:preprocess_bin/GstNvDsPreProcess:preprocess0
ERROR from preprocess0: Group 0 : group transformation failed

Debug info: gstnvdspreprocess.cpp(1375): gst_nvdspreprocess_on_frame (): /GstPipeline:pipeline/GstBin:preprocess_bin/GstNvDsPreProcess:preprocess0
Quitting
App run failed
```

This issue is hardware related, but it is not a size limitation of NvBufSurface.

If the width and height of your video are the same as those set in nvstreammux, nvstreammux will pass the buffer through, which means no scaling will occur. In that case the value of layout is NVBUF_LAYOUT_BLOCK_LINEAR, but the above code only supports NVBUF_LAYOUT_PITCH.

The data order of the block-linear format is not open to the public.
Refer to this FAQ.

-3 represents NvBufSurfTransformError_Invalid_Params.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.