NvDsInferLayerInfo buffer pointer cannot be mapped to GstBuffer

Hi, I am trying to access the tensor output of a model run with nvinferserver in DeepStream 5.0. My network produces an image output that I need to be able to use.

This is the code I am using in a probe on the src pad of the pgie element in the pipeline:

    GstBuffer *meta_buf = (GstBuffer *) info->data;
    NvDsMetaList *l_frame = NULL;

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (meta_buf);
    if (batch_meta == NULL) {
      g_print ("batch meta is null\n");
      return GST_PAD_PROBE_OK;
    }

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
        l_frame = l_frame->next) {
      NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

      for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list;
          l_user != NULL; l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META) {
          g_print ("Frame number %d is not a tensor\n", frame_meta->frame_num);
          continue;
        }

        NvDsInferTensorMeta *meta = (NvDsInferTensorMeta *) user_meta->user_meta_data;
        NvDsInferLayerInfo *layer_info = &meta->output_layers_info[0];
        layer_info->buffer = meta->out_buf_ptrs_host[0];

        /* This is the cast that fails: the layer buffer is not a GstBuffer */
        GstBuffer *b = (GstBuffer *) layer_info->buffer;
        GstMapInfo out_map_info;

        if (!gst_buffer_map (b, &out_map_info, GST_MAP_READ)) {
          g_print ("Error: Failed to map gst buffer\n");
          return GST_PAD_PROBE_OK;
        }
      }
    }
The mapping fails because it cannot assert that b is a buffer. Other fields in NvDsInferLayerInfo, such as inferDims, are populated correctly. Do you know how I can access the image output of my network?

That is not possible. Why do you want to map the tensor output layer buffer as a GstBuffer? The layer buffer is not created by a GstBuffer allocator, so it cannot be identified as a GstBuffer.

I want to be able to see the image from my tensor output layer. Is there a way to do this?

I was thinking I could do a memcpy, since inferDims gives me the size of the output layer, which I can use to work out the number of bytes to copy. However, this produces an entirely gray image, so I'm not sure it is copying the correct data.

Do you want to see the layer buffer (which is actually a picture) on screen while the app is running? And what should be displayed when the model detects nothing?

The “dataType” and “inferDims” fields tell you the layer buffer size. But you also need to know what the data actually is: “RGB”, “YUV”, or something else.

My model is not a detection, segmentation, or classification model; it produces an image output for every frame, which I want to see while the app is running. Can I do this?

It is not possible with current DeepStream.

Could you please explain why?

Have you noticed the caps of the DeepStream plugins? “video/x-raw(memory:NVMM)” is not a standard GStreamer cap; it is an NVIDIA-specific cap, which means DeepStream transfers video data in a special format and type. To make use of NVIDIA hardware, the video data carried in a GstBuffer between DeepStream plugins is an NvBufSurface. This data structure exists for NVIDIA hardware (GPU, VENC, …), and the buffer inside it is specific to that hardware.
DeepStream currently transfers model output as batch metadata attached to the GstBuffer. There is no plugin that supports replacing the NvBufSurface with metadata. It is also impossible to attach an arbitrary buffer to a GstBuffer in a pad probe callback: the buffer must be an NvBufSurface, and the caps of the input buffer were decided by plugin negotiation when the pipeline started, so we cannot know whether a new buffer would satisfy the pad caps.

Ok thank you for explaining that.

In sources/apps/apps-common/includes/src/deepstream_primary_gie.c there is a function called write_infer_output_to_file, which made me think it was possible to access and view the output of the network. Do you know how I can use this?

The sample code in deepstream-image-meta-test shows how to store image metadata to JPEG files. You may refer to it. nvds_obj_enc_process() can help encode the image into JPEG format.

It seems you have not attached the model output to user meta yet, so the “raw-output-generated-callback” and “raw-output-generated-userdata” properties will be useful for you. You need to write the callback function yourself. sources/apps/apps-common/includes/src/deepstream_primary_gie.c is a good sample for you.

Sorry, I don’t understand. Do I need to create custom user metadata and attach the model output to it?

There are two things.

  1. The model runs inside the nvinfer plugin, but since your model is customized, the plugin needs to know how to attach the model output to NVIDIA metadata. NVIDIA metadata is the only way to pass model output outside the plugin. So you need to implement the raw-output-generated callback yourself and enable it by setting the “raw-output-generated-callback” and “raw-output-generated-userdata” properties on the nvinfer plugin.
  2. After you finish the above step, you can get the model output as user metadata in the DeepStream pipeline. In your DeepStream app code, retrieve the user meta with a pad probe, as implemented in the deepstream-image-meta-test sample. Once you have the user meta, you can access your data and save it however you like.
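To make step 1 concrete, a rough sketch of what registering that callback could look like. This is a fragment, not a complete app; the callback signature should match the gst_nvinfer_raw_output_generated_callback typedef in gstnvdsinfer.h of your DeepStream SDK, so verify the exact parameters against your headers before relying on it:

```c
/* Sketch only: check the signature against gstnvdsinfer.h in your SDK. */
static void
on_raw_output (GstBuffer *buf, NvDsInferNetworkInfo *network_info,
    NvDsInferLayerInfo *layers_info, guint num_layers,
    guint batch_size, gpointer user_data)
{
  /* layers_info[i].buffer points at the host copy of each output layer;
   * use inferDims and dataType to work out how many bytes each layer holds,
   * then copy or dump the data as needed. */
  for (guint i = 0; i < num_layers; i++)
    g_print ("layer %u: %s\n", i, layers_info[i].layerName);
}

/* When building the pipeline ("pgie" is the nvinfer element): */
g_object_set (G_OBJECT (pgie),
    "raw-output-generated-callback", on_raw_output,
    "raw-output-generated-userdata", NULL,
    NULL);
```

The deepstream_primary_gie.c sample mentioned above wires up these same two properties, so it is a good reference for how the callback is meant to be used.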

Thank you so much! I’ll try that now :)