Package TensorRT inference results into a GstBuffer and push them downstream

Please provide complete information as applicable to your setup.

Hardware Platform: Jetson Xavier NX
DeepStream Version: 6.0.1
JetPack Version (valid for Jetson only): 4.6.2
TensorRT Version: 8.2
NVIDIA GPU Driver Version (valid for GPU only):


The pipeline sequence I use is shown in the attachment.
I use Gst-nvinfer to run my model (a super-resolution model) and get the inference result as NvDsInferLayerInfo.
I use Gst-nvdsvideotemplate to receive that data (NvDsInferLayerInfo).
I want to use the NvDsInferLayerInfo data to update the contents of a GstBuffer and push it to the downstream plugins.
How do I copy the NvDsInferLayerInfo data into outBuffer (a GstBuffer)?
The code is as follows:

/* Output Processing Thread */
void EnhancerAlgorithm::OutputThread(void)
{
GstFlowReturn flow_ret;
GstBuffer *outBuffer = NULL;
NvBufSurface *outSurf = NULL;
int num_in_meta = 0;
int video_out_width = 0;
int video_out_height = 0;

NvDsBatchMeta *batch_meta = NULL;
NvDsInferLayerInfo *outInfo = NULL;

NvBufSurfTransform_Error err = NvBufSurfTransformError_Success;
std::unique_lock<std::mutex> lk(m_processLock);
/* Run till signalled to stop. */
while (1) {

/* Wait if processing queue is empty. */
if (m_processQ.empty()) {
  if (m_stop == TRUE) {
    break;
  }
  m_processCV.wait(lk);
  continue;
}

PacketInfo packetInfo = m_processQ.front();
m_processQ.pop();

m_processCV.notify_all();
lk.unlock();

// Add custom algorithm logic here
// Once buffer processing is done, push the buffer to the downstream by
// using gst_pad_push function

NvBufSurface *in_surf = getNvBufSurface (packetInfo.inbuf);
batch_meta = gst_buffer_get_nvds_batch_meta (packetInfo.inbuf);
if (!batch_meta) {
  GST_ELEMENT_ERROR (m_element, STREAM, FAILED,
    ("%s:No batch meta available", __func__), (NULL));
  return;
}
num_in_meta = batch_meta->num_frames_in_batch;
// printf("num_in_meta: %d \n",num_in_meta);
// Find the tensor output meta attached by Gst-nvinfer
NvDsMetaList * l_frame = NULL;

for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
   l_frame = l_frame->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
  /* Iterate object metadata in frame */
  for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list; l_user != NULL;
      l_user = l_user->next){

    NvDsUserMeta *user_meta = (NvDsUserMeta *)l_user->data;
    if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
      continue;

    NvDsInferTensorMeta *meta = (NvDsInferTensorMeta *) user_meta->user_meta_data;
    //for (unsigned int i = 0; i < meta->num_output_layers; i++) {
    NvDsInferLayerInfo *info = &meta->output_layers_info[0];
    info->buffer = meta->out_buf_ptrs_host[0];  /* only the first output layer is used */
    outInfo = info;
    video_out_height = outInfo->inferDims.d[1];
    video_out_width = outInfo->inferDims.d[2];
      // printf("in_surf colorformat =%d\n", in_surf->surfaceList[frame_meta->batch_id].colorFormat);
    //}
  }
}

if(!outInfo || video_out_height <= 0 || video_out_width <= 0){
  printf("The model inference result is error . \n");
  return;
}

std::cout<<"Shape "<<outInfo->inferDims.numElements<<std::endl;
printf("layer name: %s \n",outInfo->layerName);
printf("frame_width: %d \n",video_out_width);
printf("frame_height: %d \n",video_out_height);
printf("******************************* \n");

// Transform IP case
outSurf = in_surf;
outBuffer = packetInfo.inbuf; 

// gint size = video_out_width * video_out_height * 3 / 2;
// outBuffer = gst_buffer_new_allocate(NULL, size, NULL);

// Output buffer parameters checking
if (outSurf->numFilled != 0)
{
    g_assert ((guint)m_outVideoInfo.width == outSurf->surfaceList->width);
    g_assert ((guint)m_outVideoInfo.height == outSurf->surfaceList->height);
}

flow_ret = gst_pad_push (GST_BASE_TRANSFORM_SRC_PAD (m_element), outBuffer);
printf("CustomLib: %s in_surf=%p, Pushing Frame %d to downstream..."
    " flow_ret = %d TS=%" GST_TIME_FORMAT " \n",  __func__, in_surf,
    packetInfo.frame_num, flow_ret, GST_TIME_ARGS(GST_BUFFER_PTS(outBuffer)));
GST_DEBUG ("CustomLib: %s in_surf=%p, Pushing Frame %d to downstream..."
    " flow_ret = %d TS=%" GST_TIME_FORMAT " \n",  __func__, in_surf,
    packetInfo.frame_num, flow_ret, GST_TIME_ARGS(GST_BUFFER_PTS(outBuffer)));

lk.lock();
continue;

}

  1. You need to create a new GstBuffer, because in_surf will not be sent downstream.
  2. You need to create a new NvBufSurface to hold the model’s output data; the new GstBuffer will link to this new NvBufSurface. Here are some samples that create and parse one (see also the sketch after this list): Jetson Nano CSI Raspberry Pi Camera V2 upside down video when run an example with deepstream-app - #7 by DaneLLL
    RTSP camera access frame issue
  3. You need to set new caps (width, height, format) in GetCompatibleCaps of nvdsvideotemplate, because the buffer’s caps have changed.
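
For illustration, here is a minimal sketch of steps 1 and 2 under stated assumptions: the model’s host output (outInfo->buffer in the code above) is an interleaved RGB byte image of video_out_width x video_out_height, the pipeline runs on a single GPU (gpuId 0), and NVBUF_MEM_DEFAULT is acceptable on Jetson. If the model actually emits planar float data, it must be converted to the surface’s color format first; the sketch only shows the surface creation and the pitch-aware copy.

/* Sketch: create a one-frame RGB NvBufSurface sized to the model output
 * and copy the host tensor into it row by row. */
NvBufSurface *new_surf = NULL;
NvBufSurfaceCreateParams create_params = {0};

create_params.gpuId = 0;                            /* assumption: single GPU */
create_params.width = video_out_width;
create_params.height = video_out_height;
create_params.size = 0;                             /* 0: let the API compute the size */
create_params.colorFormat = NVBUF_COLOR_FORMAT_RGB; /* assumption: interleaved RGB */
create_params.layout = NVBUF_LAYOUT_PITCH;
create_params.memType = NVBUF_MEM_DEFAULT;          /* surface memory on Jetson */

if (NvBufSurfaceCreate (&new_surf, 1, &create_params) != 0)
  return;

/* The destination pitch is usually larger than width * bytesPerPix,
 * so the copy must be done line by line, not with one big memcpy. */
NvBufSurfaceMap (new_surf, 0, 0, NVBUF_MAP_WRITE);
NvBufSurfaceParams *sp = &new_surf->surfaceList[0];
char *dst = (char *) sp->mappedAddr.addr[0];
char *src = (char *) outInfo->buffer;               /* host copy of the tensor */
unsigned int row_bytes = sp->width * sp->planeParams.bytesPerPix[0];

for (unsigned int h = 0; h < sp->height; h++)
  memcpy (dst + h * sp->planeParams.pitch[0], src + h * row_bytes, row_bytes);

NvBufSurfaceSyncForDevice (new_surf, 0, 0);
NvBufSurfaceUnMap (new_surf, 0, 0);
new_surf->numFilled = 1;

For step 3, a hedged sketch of announcing the new resolution in GetCompatibleCaps. The method name and class name follow the thread; the 2x scale factor for a super-resolution model and the omission of full negotiation against othercaps are assumptions of this sketch:

GstCaps *EnhancerAlgorithm::GetCompatibleCaps (GstPadDirection direction,
    GstCaps *in_caps, GstCaps *othercaps)
{
  GstCaps *result = gst_caps_copy (in_caps);
  if (direction == GST_PAD_SINK) {
    /* Announce the upscaled resolution on the source pad
     * (assumption: the model upscales by a factor of 2). */
    gint width = 0, height = 0;
    GstStructure *s = gst_caps_get_structure (result, 0);
    gst_structure_get_int (s, "width", &width);
    gst_structure_get_int (s, "height", &height);
    gst_caps_set_simple (result, "width", G_TYPE_INT, width * 2,
        "height", G_TYPE_INT, height * 2, NULL);
  }
  return result;
}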

Thank you very much for your reply.
I have tried this, but how can I copy the model’s output data into the NvBufSurface?
Am I doing it the right way?
The model’s output data is NvDsInferTensorMeta *meta = (NvDsInferTensorMeta *) user_meta->user_meta_data.

NvBufSurface *in_surf = getNvBufSurface (packetInfo.inbuf);
batch_meta = gst_buffer_get_nvds_batch_meta (packetInfo.inbuf);
if (!batch_meta) {
  GST_ELEMENT_ERROR (m_element, STREAM, FAILED,
    ("%s:No batch meta available", __func__), (NULL));
  return;
}
num_in_meta = batch_meta->num_frames_in_batch;
// Find the tensor output meta attached by Gst-nvinfer
NvDsMetaList * l_frame = NULL;
for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
   l_frame = l_frame->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
  /* Iterate object metadata in frame */
  for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list; l_user != NULL;
      l_user = l_user->next){

    NvDsUserMeta *user_meta = (NvDsUserMeta *)l_user->data;
    if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
      continue;

    NvDsInferTensorMeta *meta = (NvDsInferTensorMeta *) user_meta->user_meta_data;
    infertensorMeta = meta;
    //for (unsigned int i = 0; i < meta->num_output_layers; i++) {
    NvDsInferLayerInfo *info = &meta->output_layers_info[0];
    info->buffer = meta->out_buf_ptrs_host[0];
    outInfo = info;
    video_out_height = outInfo->inferDims.d[1];
    video_out_width = outInfo->inferDims.d[2];
      // printf("in_surf colorformat =%d\n", in_surf->surfaceList[frame_meta->batch_id].colorFormat);
    //}
  }
}

if(!outInfo || video_out_height <= 0 || video_out_width <= 0){
  printf("The model inference result is error . \n");
  return;
}

std::cout<<"Shape "<<outInfo->inferDims.numElements<<std::endl;
printf("layer name: %s \n",outInfo->layerName);
printf("frame_width: %d \n",video_out_width);
printf("frame_height: %d \n",video_out_height);
printf("******************************* \n");

if (m_transformMode) {
    if (hw_caps == true)
    {
        // printf("11111111111111111111111111111111111111 \n");
        // set surface transform session when transform mode is on
        int err = NvBufSurfTransformSetSessionParams(&m_config_params);
        if (err != NvBufSurfTransformError_Success) {
            GST_ERROR_OBJECT (m_element, "Set session params failed");
            return;
        }
        // Transform mode, hence transform input buffer to output buffer
        GstBuffer *newGstOutBuf = NULL;
        GstFlowReturn result = GST_FLOW_OK;
        result = gst_buffer_pool_acquire_buffer (m_dsBufferPool, &newGstOutBuf, NULL);
        if (result != GST_FLOW_OK)
        {
            //GST_ERROR_OBJECT (m_element, "InsertCustomFrame failed error = %d, exiting...", result);
            exit(-1);
        }
        // Copy meta and transform if required
        if (!gst_buffer_copy_into (newGstOutBuf, packetInfo.inbuf, GST_BUFFER_COPY_META, 0, -1)) {
            GST_DEBUG_OBJECT (m_element, "Buffer metadata copy failed \n");
        }
        nvds_set_input_system_timestamp (newGstOutBuf, GST_ELEMENT_NAME(m_element));
        // Copy previous buffer to new buffer, repeat the frame

        GstBuffer *buf = newGstOutBuf;
        GstMapInfo outmap = GST_MAP_INFO_INIT;
        gst_buffer_map (buf, &outmap, GST_MAP_WRITE);
        NvBufSurface*  surface = (NvBufSurface *)outmap.data;

        NvBufSurfTransformRect src_rect, dst_rect;
        src_rect.top   = 0;
        src_rect.left  = 0;
        src_rect.width = (guint) surface->surfaceList[0].width;
        src_rect.height= (guint) surface->surfaceList[0].height;

        printf("surface->surfaceList[0].colorFormat: %d \n",surface->surfaceList[0].colorFormat);
        printf("surface->surfaceList[0].height: %d \n",surface->surfaceList[0].height);

        dst_rect.top   = 0;
        dst_rect.left  = 0;
        dst_rect.width = (guint) surface->surfaceList[0].width;
        dst_rect.height= (guint) surface->surfaceList[0].height;

        NvBufSurface *dst_surface = NULL;
        NvBufSurfaceCreateParams nvbufsurface_create_params;

        nvbufsurface_create_params.gpuId  = surface->gpuId;
        nvbufsurface_create_params.width  = (gint) surface->surfaceList[0].width;
        nvbufsurface_create_params.height = (gint) surface->surfaceList[0].height;
        nvbufsurface_create_params.size = 0;
        nvbufsurface_create_params.colorFormat = surface->surfaceList[0].colorFormat;
        nvbufsurface_create_params.layout = surface->surfaceList[0].layout;
        nvbufsurface_create_params.memType = surface->memType;

        NvBufSurfaceCreate(&dst_surface,1,&nvbufsurface_create_params);

        NvBufSurfTransformParams nvbufsurface_params;
        nvbufsurface_params.src_rect = &src_rect;
        nvbufsurface_params.dst_rect = &dst_rect;
        nvbufsurface_params.transform_flag =  0;
        nvbufsurface_params.transform_filter = NvBufSurfTransformInter_Default;

        NvBufSurfTransformConfigParams transform_config_params;
        NvBufSurfTransform_Error err11;

        transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
        transform_config_params.gpu_id = surface->gpuId;
        transform_config_params.cuda_stream = NULL;
        err11 = NvBufSurfTransformSetSessionParams (&transform_config_params);
        // copy to dst_surface
        err11 = NvBufSurfTransform (surface, dst_surface, &nvbufsurface_params);
        // rotate 180 degrees back into the original surface
        // nvbufsurface_params.transform_flag =  NVBUFSURF_TRANSFORM_FLIP;
        // nvbufsurface_params.transform_flip = NvBufSurfTransform_Rotate180;
        // err = NvBufSurfTransform (dst_surface, surface, &nvbufsurface_params);


        // NvBufSurfaceDestroy(dst_surface);
        gst_buffer_unmap (buf, &outmap);


        NvBufSurface *out_surf =  dst_surface;//getNvBufSurface (newGstOutBuf);
        if (!in_surf || !out_surf)
        {
            g_print ("CustomLib: NvBufSurface not found in the buffer...exiting...\n");
            exit(-1);
        }


        out_surf->numFilled = in_surf->numFilled;
        out_surf->memType=in_surf->memType;
        out_surf->surfaceList[0].colorFormat = in_surf->surfaceList[0].colorFormat;
        // Enable below code to copy the frame, else it will insert GREEN frame
        // if (1)
        // {
        //     NvBufSurfTransformParams transform_params;
        //     transform_params.transform_flag = NVBUFSURF_TRANSFORM_FILTER;
        //     transform_params.transform_flip = NvBufSurfTransform_None;
        //     transform_params.transform_filter = NvBufSurfTransformInter_Default;

        //     NvBufSurfTransform (in_surf, out_surf, &transform_params);
        // }

        save_transformed_plate_images_jetson(out_surf,infertensorMeta->out_buf_ptrs_host[0]);
        for (uint frameIndex = 0; frameIndex < out_surf->numFilled; frameIndex++) {
          /* use char* so the pointer arithmetic below is standard C++ */
          char *src_data = NULL;
          src_data = (char *) malloc (out_surf->surfaceList[frameIndex].dataSize);
          if (src_data == NULL) {
            g_print("Error: failed to malloc src_data \n");
          }

          NvBufSurfaceMap (out_surf, -1, -1, NVBUF_MAP_READ);
          NvBufSurfacePlaneParams *pParams = &out_surf->surfaceList[frameIndex].planeParams;
          unsigned int offset = 0;
          for(unsigned int num_planes=0; num_planes < pParams->num_planes; num_planes++){
              if(num_planes>0)
                  offset += pParams->height[num_planes-1]*(pParams->bytesPerPix[num_planes-1]*pParams->width[num_planes-1]);
              for (unsigned int h = 0; h < pParams->height[num_planes]; h++) {
                memcpy((void *)(src_data+offset+h*pParams->bytesPerPix[num_planes]*pParams->width[num_planes]),
                      (void *)((char *)infertensorMeta->out_buf_ptrs_host[0]+h*pParams->pitch[num_planes]),
                      pParams->bytesPerPix[num_planes]*pParams->width[num_planes]
                      );
              }
          }
          NvBufSurfaceSyncForDevice (out_surf, -1, -1);
          NvBufSurfaceUnMap (out_surf, -1, -1);
        }

        // printf("frame_width: %d \n",outSurf->surfaceList[0].width);
        // printf("frame_height: %d \n",outSurf->surfaceList[0].height);
        // printf("******************************* \n");

        outSurf = out_surf;
        outBuffer = newGstOutBuf;

        GST_BUFFER_PTS (outBuffer) = GST_BUFFER_PTS (packetInfo.inbuf);
        // Unref the input buffer
        gst_buffer_unref(packetInfo.inbuf);


        // Output buffer parameters checking
        if (hw_caps == true) {
          if (outSurf->numFilled != 0) {
            g_assert ((guint) m_outVideoInfo.width == outSurf->surfaceList->width);
            g_assert ((guint) m_outVideoInfo.height == outSurf->surfaceList->height);
          }
        }

        nvds_set_output_system_timestamp (outBuffer, GST_ELEMENT_NAME (m_element));
        flow_ret = gst_pad_push (GST_BASE_TRANSFORM_SRC_PAD (m_element), outBuffer);
        GST_DEBUG_OBJECT (m_element, "CustomLib: %s in_surf=%p, Pushing Frame %d to downstream... flow_ret = %d TS=%" GST_TIME_FORMAT " \n",
            __func__, in_surf, packetInfo.frame_num, flow_ret, GST_TIME_ARGS (GST_BUFFER_PTS (outBuffer)));

        lk.lock ();
        continue;

Please refer to this DeepStream sample code snippet; it creates a new NvBufSurface to accept RGB data.

/* Helper function to get the NvBufSurface from the GstBuffer */
NvBufSurface *DSCustomLibraryBase::getNvBufSurface (GstBuffer *inbuf)
{
GstMapInfo in_map_info;
NvBufSurface *nvbuf_surface = NULL;

/* Map the buffer contents and get the pointer to NvBufSurface. */
if (!gst_buffer_map (inbuf, &in_map_info, GST_MAP_READ)) {
    GST_ELEMENT_ERROR (m_element, STREAM, FAILED,
        ("%s:gst buffer map to get pointer to NvBufSurface failed", __func__), (NULL));
    return NULL;
}

// Assuming that the plugin uses DS NvBufSurface data structure
nvbuf_surface = (NvBufSurface *) in_map_info.data;

gst_buffer_unmap(inbuf, &in_map_info);
return nvbuf_surface;

}

GstBuffer *newGstOutBuf = NULL;

// Create a new GstBuffer (newGstOutBuf) based on the old GstBuffer (packetInfo.inbuf).
// newGstOutBuf must be a valid buffer first, e.g. acquired from the buffer pool:
// gst_buffer_pool_acquire_buffer (m_dsBufferPool, &newGstOutBuf, NULL);
gst_buffer_copy_into (newGstOutBuf, packetInfo.inbuf, GST_BUFFER_COPY_META, 0, -1);

// Get the NvBufSurface (out_surf) backing newGstOutBuf
NvBufSurface *out_surf = getNvBufSurface (newGstOutBuf);

// Inference results
NvDsInferTensorMeta *meta = (NvDsInferTensorMeta *) user_meta->user_meta_data;
void *outBuffer = meta->out_buf_ptrs_host[0];

Hello,
I have created a new NvBufSurface (out_surf).
I want to replace out_surf->surfaceList[i].mappedAddr.addr[0] with outBuffer.
What should I do?
I tried memcpy(out_surf->surfaceList[frameIndex].mappedAddr.addr[0], outBuffer, out_surf->surfaceList[frameIndex].dataSize), but it failed.
What should the third parameter of memcpy be?
Is there any code for reference?

Hi @8389128,
What is the frame type of outBuffer? Is it only one plane? Did you create the new output NvBufSurface with the same format?
Since the pitch may be larger than the width in the output NvBufSurface, you may need to refer to DeepStream SDK FAQ - #17 by mchi and copy the data line by line.
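
A minimal sketch of that line-by-line copy, assuming a single-plane RGB output surface and that outBuffer points to tightly packed rows (width * bytesPerPix, no padding):

NvBufSurfaceMap (out_surf, 0, 0, NVBUF_MAP_WRITE);
NvBufSurfaceParams *sp = &out_surf->surfaceList[0];
unsigned int row_bytes = sp->width * sp->planeParams.bytesPerPix[0];

/* Copy one row at a time: the surface pitch usually exceeds row_bytes. */
for (unsigned int h = 0; h < sp->height; h++)
  memcpy ((char *) sp->mappedAddr.addr[0] + h * sp->planeParams.pitch[0],
          (char *) outBuffer + h * row_bytes, row_bytes);

NvBufSurfaceSyncForDevice (out_surf, 0, 0);
NvBufSurfaceUnMap (out_surf, 0, 0);

With this scheme the third memcpy parameter is one row (width * bytesPerPix), not dataSize: dataSize includes the pitch padding, so a single full-size memcpy writes past the end of the packed source data.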

So far, even if you copy incomplete data into the output NvBufSurface, can you get some output from the sink/render? I mean, do you see that the current solution should work?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks
