How to do custom preprocessing in SGIE based on DeepStream 5.0?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
dGPU
• DeepStream Version
DeepStream 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
7.0
• NVIDIA GPU Driver Version (valid for GPU only)
CUDA 10.2
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

My question is similar to the topic Image pre-processing between PGIE and SGIE.

I plan to build a pipeline for face detection, structured like this:
PGIE (face detection) ----> SGIE1 (face landmark prediction) ----> SGIE2 (face attribute prediction: age, gender, with/without mask)
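In code, roughly like this (a minimal sketch; the config file names are placeholders of mine, and each SGIE config would set unique-id / operate-on-gie-id so it runs on the upstream detections):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GError *err = NULL;
  gst_init (&argc, &argv);
  GstElement *pipeline = gst_parse_launch (
      "uridecodebin uri=file:///path/to/video.mp4 ! m.sink_0 "
      "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
      "nvinfer config-file-path=pgie_face_detect.txt ! "
      "nvinfer config-file-path=sgie1_landmarks.txt ! "
      "nvinfer config-file-path=sgie2_attributes.txt ! "
      "nvvideoconvert ! nvdsosd ! nveglglessink", &err);
  if (!pipeline) {
    g_printerr ("parse error: %s\n", err->message);
    return -1;
  }
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}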

The preprocessing for SGIE2 needs the SGIE1 result (landmark points) to do face alignment (a similarity or affine transform). Assume that I can save the SGIE1 results to the object metadata (NvDsObjectMeta).
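For example, I imagine attaching the landmarks as user meta on each object, something like this minimal sketch (the FaceLandmarks struct and the meta type string are placeholders of mine, not an official schema):

#include <glib.h>
#include "nvdsmeta.h"

/* Hypothetical payload: 5 landmark points per face. */
typedef struct { float x[5], y[5]; } FaceLandmarks;

#define NVDS_USER_META_FACE_LANDMARKS \
    (nvds_get_user_meta_type ((gchar *) "NVIDIA.NVINFER.FACE_LANDMARKS"))

static gpointer copy_landmarks (gpointer data, gpointer user_data)
{
  NvDsUserMeta *um = (NvDsUserMeta *) data;
  return g_memdup (um->user_meta_data, sizeof (FaceLandmarks));
}

static void release_landmarks (gpointer data, gpointer user_data)
{
  NvDsUserMeta *um = (NvDsUserMeta *) data;
  g_free (um->user_meta_data);
  um->user_meta_data = NULL;
}

/* Attach landmarks (e.g. parsed from the SGIE1 output tensor) to obj_meta. */
static void attach_landmarks (NvDsBatchMeta *batch_meta,
    NvDsObjectMeta *obj_meta, const FaceLandmarks *lm)
{
  NvDsUserMeta *um = nvds_acquire_user_meta_from_pool (batch_meta);
  um->user_meta_data = g_memdup (lm, sizeof (FaceLandmarks));
  um->base_meta.meta_type = NVDS_USER_META_FACE_LANDMARKS;
  um->base_meta.copy_func = copy_landmarks;
  um->base_meta.release_func = release_landmarks;
  nvds_add_user_meta_to_obj (obj_meta, um);
}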

How should the preprocessing for SGIE2 be done?
The answer in Image pre-processing between PGIE and SGIE is not clear to me, and I did not find the code mentioned in that topic (maybe something changed in DeepStream 5.0?).

Please check comment #6 in that topic.
BTW, you can refer to DeepStream 5.0 nvinferserver how to use upstream tensor meta as a model input - #4 by giangblackk if you are using nvinferserver.

1. In comment #6 you suggested adding a probe between the PGIE and SGIE and doing an affine transform on every box in that probe function. But for our requirement, we need to do the affine transformation on the ROI image (the image cropped from the frame based on the bbox), using the landmark points as the "source points", and then send the transformed ROI image to the SGIE as its inference input. So the transform is not on the bbox, it's on the cropped image (see the alignment sketch after this list).
2. We plan to use the nvdsinfer plugin.
3. Regarding comment #11, we cannot find that code in DeepStream 5.0.
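Here is the kind of alignment we mean, as a standalone OpenCV sketch (the 5-point destination template and the 112x112 output size are common choices for face models, not something DeepStream prescribes):

#include <opencv2/opencv.hpp>
#include <vector>

/* Align a cropped face ROI using 5 landmark points: estimate a
 * similarity transform (rotation + uniform scale + translation)
 * from the detected points to a canonical template, then warp.
 * The template below is a widely used 112x112 reference layout;
 * adjust it to whatever your attribute model expects. */
cv::Mat alignFace (const cv::Mat &roi, const std::vector<cv::Point2f> &landmarks)
{
  static const std::vector<cv::Point2f> templ = {
      {38.2946f, 51.6963f}, {73.5318f, 51.5014f}, {56.0252f, 71.7366f},
      {41.5493f, 92.3655f}, {70.7299f, 92.2041f}};
  cv::Mat M = cv::estimateAffinePartial2D (landmarks, templ);
  if (M.empty ())
    return roi.clone ();      /* estimation failed, fall back to raw crop */
  cv::Mat aligned;
  cv::warpAffine (roi, aligned, M, cv::Size (112, 112));
  return aligned;
}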

The SGIE will crop the ROI from the original image based on your modified bbox. If you need to do some affine transformation on the cropped image, then you need to modify the nvinfer preprocessing code: gstnvinfer.cpp → gst_nvinfer_process_objects.

Thanks! BTW, where can I find the implementation details of gstnvinfer? I'm afraid I cannot understand the code clearly without additional information.

Check the function code; all of nvinfer is open sourced. You can refer to the DeepStream SDK FAQ and the nvinfer plugin manual: Gst-nvinfer — DeepStream 6.3 Release documentation.

Thanks! Let me check.

Have you solved the problem? I'm working on the same problem. Do you have any good suggestions?

I still need to do some debugging, but I do know how to modify gst-nvinfer. First you need to read the info that bcao mentioned, then check the code of gst_nvinfer_process_full_frame and gst_nvinfer_process_objects in gstnvinfer.cpp:

the first one is the preprocessing path for full-frame inference, the second one is for object (crop) inference.
Both of those functions call a function named "convert_batch_and_push_to_input_thread", which performs the preprocessing asynchronously.

Please read the "DeepStream SDK FAQ" to understand some background info.

Hope it helps. :)

Thank you. I also looked at this place. My idea is to transform the buffer before it is sent in the function convert_batch_and_push_to_input_thread, but I don't know how to get the target image from the GPU and modify it. How do you do it?
I think it can be modified here (in the function get_converted_buffer), similar to the "pad the scaled image with black color" step:

static GstFlowReturn
get_converted_buffer (GstNvInfer * nvinfer, NvBufSurface * src_surf,
    NvBufSurfaceParams * src_frame, NvOSD_RectParams * crop_rect_params,
    NvBufSurface * dest_surf, NvBufSurfaceParams * dest_frame,
    gdouble & ratio_x, gdouble & ratio_y, void *destCudaPtr)
{
  guint src_left = GST_ROUND_UP_2 ((unsigned int)crop_rect_params->left);
  guint src_top = GST_ROUND_UP_2 ((unsigned int)crop_rect_params->top);
  guint src_width = GST_ROUND_DOWN_2 ((unsigned int)crop_rect_params->width);
  guint src_height = GST_ROUND_DOWN_2 ((unsigned int)crop_rect_params->height);
  guint dest_width, dest_height;

  if (nvinfer->maintain_aspect_ratio) {
    /* Calculate the destination width and height required to maintain
     * the aspect ratio. */
    double hdest = dest_frame->width * src_height / (double) src_width;
    double wdest = dest_frame->height * src_width / (double) src_height;
    int pixel_size;
    cudaError_t cudaReturn;

    if (hdest <= dest_frame->height) {
      dest_width = dest_frame->width;
      dest_height = hdest;
    } else {
      dest_width = wdest;
      dest_height = dest_frame->height;
    }

    switch (dest_frame->colorFormat) {
      case NVBUF_COLOR_FORMAT_RGBA:
        pixel_size = 4;
        break;
      case NVBUF_COLOR_FORMAT_RGB:
        pixel_size = 3;
        break;
      case NVBUF_COLOR_FORMAT_GRAY8:
      case NVBUF_COLOR_FORMAT_NV12:
        pixel_size = 1;
        break;
      default:
        g_assert_not_reached ();
        break;
    }

    /* Pad the scaled image with black color. */
    cudaReturn =
        cudaMemset2DAsync ((uint8_t *) destCudaPtr + pixel_size * dest_width,
        dest_frame->planeParams.pitch[0], 0,
        pixel_size * (dest_frame->width - dest_width), dest_frame->height,
        nvinfer->convertStream);
    if (cudaReturn != cudaSuccess) {
      GST_ERROR_OBJECT (nvinfer,
          "cudaMemset2DAsync failed with error %s while converting buffer",
          cudaGetErrorName (cudaReturn));
      return GST_FLOW_ERROR;
    }
    cudaReturn =
        cudaMemset2DAsync ((uint8_t *) destCudaPtr +
        dest_frame->planeParams.pitch[0] * dest_height,
        dest_frame->planeParams.pitch[0], 0, pixel_size * dest_width,
        dest_frame->height - dest_height, nvinfer->convertStream);
    if (cudaReturn != cudaSuccess) {
      GST_ERROR_OBJECT (nvinfer,
          "cudaMemset2DAsync failed with error %s while converting buffer",
          cudaGetErrorName (cudaReturn));
      return GST_FLOW_ERROR;
    }
  } else {
    dest_width = nvinfer->network_width;
    dest_height = nvinfer->network_height;
  }
  printf ("%d,%d\n", dest_width, dest_height); /* debug: converted size */
  /* Calculate the scaling ratio of the frame / object crop. This will be
   * required later for rescaling the detector output boxes to input resolution.
   */
  ratio_x = (double) dest_width / src_width;
  ratio_y = (double) dest_height / src_height;

  /* Create temporary src and dest surfaces for NvBufSurfTransform API. */
  nvinfer->tmp_surf.surfaceList[nvinfer->tmp_surf.numFilled] = *src_frame;

  /* Set the source ROI. Could be entire frame or an object. */
  nvinfer->transform_params.src_rect[nvinfer->tmp_surf.numFilled] =
      {src_top, src_left, src_width, src_height};
  /* Set the dest ROI. Could be the entire destination frame or part of it to
   * maintain aspect ratio. */
  nvinfer->transform_params.dst_rect[nvinfer->tmp_surf.numFilled] =
      {0, 0, dest_width, dest_height};

  nvinfer->tmp_surf.numFilled++;

  return GST_FLOW_OK;
}

I fetched the image and used OpenCV to do the image processing, but I think using the NPP API would be the better way (because it runs on the GPU), though we don't have experience with that API.
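If someone wants to try the NPP route, here is a minimal sketch of a GPU-side affine warp on an RGB device buffer (the pointers, pitches, and matrix are placeholders; I have not validated this inside a pipeline):

#include <nppi_geometry_transforms.h>

/* Warp a 3-channel, 8-bit device image with a 2x3 affine matrix,
 * entirely on the GPU. pSrc/pDst are device pointers with the given
 * pitches (bytes per row); coeffs maps source -> destination. */
NppStatus warpRgbOnGpu (const Npp8u *pSrc, int srcPitch, int srcW, int srcH,
    Npp8u *pDst, int dstPitch, int dstW, int dstH, const double coeffs[2][3])
{
  NppiSize srcSize = {srcW, srcH};
  NppiRect srcRoi = {0, 0, srcW, srcH};
  NppiRect dstRoi = {0, 0, dstW, dstH};
  return nppiWarpAffine_8u_C3R (pSrc, srcSize, srcPitch, srcRoi,
      pDst, dstPitch, dstRoi, coeffs, NPPI_INTER_LINEAR);
}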

There are two ways to get the image from the GstBuffer:

1. Using cudaMemcpy. Code piece as below:

GstBuffer *buf = (GstBuffer *) info->data;
NvDsMetaList *l_frame = NULL;
GstMapInfo in_map_info;

if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
  g_print ("Error: Failed to map gst buffer\n");
  return GST_PAD_PROBE_OK;
}

std::vector<cv::Mat> frames;
NvBufSurface *surface = (NvBufSurface *) in_map_info.data;
NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
  NvBufSurfaceParams frameHandle = surface->surfaceList[frame_meta->batch_id];
  cv::Mat rawFrame;
  bool color_format_supported = true;

  /* Allocate a host Mat matching the surface's channel count. */
  switch (frameHandle.colorFormat) {
    case NVBUF_COLOR_FORMAT_RGBA:
    case NVBUF_COLOR_FORMAT_RGBx:
    case NVBUF_COLOR_FORMAT_BGRA:
    case NVBUF_COLOR_FORMAT_BGRx:
      rawFrame.create (cv::Size (frameHandle.width, frameHandle.height), CV_8UC4);
      break;
    case NVBUF_COLOR_FORMAT_GRAY8:
      rawFrame.create (cv::Size (frameHandle.width, frameHandle.height), CV_8UC1);
      break;
    case NVBUF_COLOR_FORMAT_BGR:
    case NVBUF_COLOR_FORMAT_RGB:
      rawFrame.create (cv::Size (frameHandle.width, frameHandle.height), CV_8UC3);
      break;
    default:
      g_print ("unsupported color format %d in source:%d frame:%d\n",
          frameHandle.colorFormat, frame_meta->source_id, frame_meta->frame_num);
      color_format_supported = false;
  }

  if (color_format_supported) {
    /* NOTE: this assumes a tightly packed surface
     * (pitch == width * bytes per pixel); for pitched surfaces use
     * cudaMemcpy2D. HANDLE_ERROR is our own cudaError_t-checking macro. */
    HANDLE_ERROR (cudaMemcpy (rawFrame.data, frameHandle.dataPtr,
        frameHandle.dataSize, cudaMemcpyDeviceToHost));

    /* Convert everything to BGR for OpenCV processing. */
    switch (frameHandle.colorFormat) {
      case NVBUF_COLOR_FORMAT_RGBA:
      case NVBUF_COLOR_FORMAT_RGBx:
        cv::cvtColor (rawFrame, rawFrame, CV_RGBA2BGR);
        break;
      case NVBUF_COLOR_FORMAT_BGRA:
      case NVBUF_COLOR_FORMAT_BGRx:
        cv::cvtColor (rawFrame, rawFrame, CV_BGRA2BGR);
        break;
      case NVBUF_COLOR_FORMAT_GRAY8:
        cv::cvtColor (rawFrame, rawFrame, CV_GRAY2BGR);
        break;
      case NVBUF_COLOR_FORMAT_BGR:
        break;
      case NVBUF_COLOR_FORMAT_RGB:
        cv::cvtColor (rawFrame, rawFrame, CV_RGB2BGR);
        break;
      default:
        break;
    }
    frames.push_back (rawFrame);
  }
}

PS: don't forget to call gst_buffer_unmap (buf, &in_map_info); before returning.

2. Using global virtual memory access (unified memory; this concept comes from CUDA).
First you need to set the memory-type property on the streammux and nvvideoconvert elements:
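A minimal sketch of that setup, assuming dGPU, where nvbuf-memory-type = 3 selects CUDA unified memory (NVBUF_MEM_CUDA_UNIFIED) so the buffer can later be mapped for CPU access (streammux and nvvidconv are your pipeline's element pointers):

g_object_set (G_OBJECT (streammux), "nvbuf-memory-type", 3, NULL);
g_object_set (G_OBJECT (nvvidconv), "nvbuf-memory-type", 3, NULL);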


Then you can do this to get the image data as cv::Mat:

bool GetCVMatFromNvBuf (GstBuffer *buf, std::vector<cv::Mat> &frames)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  frames = std::vector<cv::Mat> (batch_meta->num_frames_in_batch);

  GstMapInfo in_map_info;
  /* Get the surface. */
  if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
    g_print ("Error: Failed to map gst buffer\n");
    return false;
  }

  NvBufSurface *surface = (NvBufSurface *) in_map_info.data;
  NvDsMetaList *l_frame;

  /* Map all surfaces/planes for CPU access and sync the unified
   * memory so the CPU sees the latest GPU writes. */
  NvBufSurfaceMap (surface, -1, -1, NVBUF_MAP_READ);
  NvBufSurfaceSyncForCpu (surface, -1, -1);

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
    cv::Mat tmp;
    NvDsFrameMeta *frameInfo = (NvDsFrameMeta *) l_frame->data;
    NvBufSurfaceParams &params = surface->surfaceList[frameInfo->batch_id];
    int height = params.height;
    int width = params.width;

    switch (params.colorFormat) {
      case NVBUF_COLOR_FORMAT_BGRA:
      case NVBUF_COLOR_FORMAT_BGRx:
        tmp = cv::Mat (height, width, CV_8UC4, params.mappedAddr.addr[0], params.pitch);
        cv::cvtColor (tmp, frames[frameInfo->batch_id], CV_BGRA2BGR);
        break;
      case NVBUF_COLOR_FORMAT_RGBA:
      case NVBUF_COLOR_FORMAT_RGBx:
        tmp = cv::Mat (height, width, CV_8UC4, params.mappedAddr.addr[0], params.pitch);
        cv::cvtColor (tmp, frames[frameInfo->batch_id], CV_RGBA2BGR);
        break;
      case NVBUF_COLOR_FORMAT_RGB:
        tmp = cv::Mat (height, width, CV_8UC3, params.mappedAddr.addr[0], params.pitch);
        cv::cvtColor (tmp, frames[frameInfo->batch_id], CV_RGB2BGR);
        break;
      case NVBUF_COLOR_FORMAT_BGR:
        tmp = cv::Mat (height, width, CV_8UC3, params.mappedAddr.addr[0], params.pitch);
        frames[frameInfo->batch_id] = tmp.clone ();
        break;
      case NVBUF_COLOR_FORMAT_GRAY8:
        tmp = cv::Mat (height, width, CV_8UC1, params.mappedAddr.addr[0], params.pitch);
        cv::cvtColor (tmp, frames[frameInfo->batch_id], CV_GRAY2BGR);
        break;
      case NVBUF_COLOR_FORMAT_NV12:
        /* NV12 occupies height * 3/2 rows of luma-width data. */
        tmp = cv::Mat (height * 3 / 2, width, CV_8UC1, params.mappedAddr.addr[0], params.pitch);
        cv::cvtColor (tmp, frames[frameInfo->batch_id], CV_YUV2BGR_NV12);
        break;
      default:
        g_print ("unsupported color format %d in source:%d frame:%d\n",
            params.colorFormat, frameInfo->source_id, frameInfo->frame_num);
        frames.clear ();
        NvBufSurfaceUnMap (surface, -1, -1);
        gst_buffer_unmap (buf, &in_map_info);
        return false;
    }
  }

  NvBufSurfaceUnMap (surface, -1, -1);
  gst_buffer_unmap (buf, &in_map_info);
  return true;
}
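A usage sketch, e.g. from a pad probe (the probe function name and where you attach it are up to your app):

static GstPadProbeReturn
sgie_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  std::vector<cv::Mat> frames;

  if (GetCVMatFromNvBuf (buf, frames)) {
    for (size_t i = 0; i < frames.size (); ++i) {
      /* frames[i] is a BGR copy of batch image i; process with OpenCV here. */
    }
  }
  return GST_PAD_PROBE_OK;
}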

For more about GstBuffer, NvBufSurface, and GstNvInferBatch, I think you need to check the DeepStream API docs and the GStreamer docs.

Hope it helps. ;)


Thank you very much for your timely reply. I learned a lot!
But as mentioned in Sgie custom preprocessing, we need to modify the data in the batch, so the problem is: how do we extract the image data from the batch?

If you read the code of gst-nvinfer carefully, you can see that nvinfer->tmp_surf stores the source batch surface (the frames/crops to convert) and mem->surf stores the converted batch surface. For how to get a cv::Mat from a batched surface, you can reference my code in the earlier comment.

What is the relationship between nvinfer->tmp_surf, mem->surf, and the batch?
What should I do if I want to change the functionality of the NvBufSurfTransform call?
I just want to replace the original transform result.