The pipeline crashes after I add my custom function to gie_processing_done_buf_prob

**• Hardware Platform (Jetson / GPU)** Jetson
**• DeepStream Version** 6.1.1
**• JetPack Version (valid for Jetson only)** 5.0.1

Dear all, I am trying to add my custom function to the deepstream-app sample. The folder is like this:


Not long ago I raised some topics here, and following the advice I received, I added my custom function to gie_processing_done_buf_prob, which is part of deepstream_app.c. The whole code snippet is:

static GstPadProbeReturn
gie_processing_done_buf_prob (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsInstanceBin *bin = (NvDsInstanceBin *) u_data;
  guint index = bin->index;
  AppCtx *appCtx = bin->appCtx;
  //extra  
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
  // Get original raw data
  GstMapInfo in_map_info;
  if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
      g_print ("Error: Failed to map gst buffer\n");
      /* nothing was mapped, so there is nothing to unmap here */
      return GST_PAD_PROBE_OK;
  }
  NvBufSurface *surface = (NvBufSurface *)in_map_info.data;
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
      NvDsFrameMeta *frame_meta = l_frame->data;
      //TODO for cuda device memory we need to use cudamemcpy
      NvBufSurfaceMap (surface, -1, -1, NVBUF_MAP_READ);
      /* Cache the mapped data for CPU access */
      NvBufSurfaceSyncForCpu (surface, 0, 0); //will do nothing for unified memory type on dGPU
      guint height = surface->surfaceList[frame_meta->batch_id].height;
      guint width = surface->surfaceList[frame_meta->batch_id].width;
      float angle;
      edgedetection (height, width, surface, frame_meta, &angle);
  }
  if (gst_buffer_is_writable (buf))
    process_buffer (buf, appCtx, index);
  return GST_PAD_PROBE_OK;
}
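
(One detail worth flagging in this snippet: buf and surface are mapped for CPU access but never unmapped before the probe returns. The matching cleanup, placed after the frame loop, would look roughly like this:)

  /* release the CPU mappings obtained above */
  NvBufSurfaceUnMap (surface, -1, -1);
  gst_buffer_unmap (buf, &in_map_info);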

And edgedetection is my custom function:

//custom function
void edgedetection(guint height,guint width,NvBufSurface *surface,NvDsFrameMeta *frame_meta,float * angle){
  Mat nv12_mat = Mat(height*3/2, width, CV_8UC1, surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
  surface->surfaceList[frame_meta->batch_id].pitch);
  //Convert nv12 to RGBA to apply algo based on RGBA
  Mat rgba_mat;
  cvtColor(nv12_mat, rgba_mat, COLOR_YUV2BGRA_NV12);
  for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
  l_obj = l_obj->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
    Rect bbox(obj->rect_params.left, obj->rect_params.top,
                obj->rect_params.width, obj->rect_params.height);
    Mat roi = rgba_mat(bbox);
    float angle;  // note: this local variable shadows the 'angle' output parameter
    Ellipse_feature_extraction(roi,angle);  
    }
}

Ellipse_feature_extraction is a function that applies edge detection to get the angle of each detected object:

void Ellipse_feature_extraction(const Mat& img,float& angle) {
    if (img.empty()) {
        printf("Received an empty image\n");
        return;
    }
    Mat imgray;
    cvtColor(img, imgray, COLOR_BGRA2GRAY);

    Mat edges;
    Canny(imgray, edges, 500, 100);

    Mat thresh;
    threshold(edges, thresh, 127, 255, THRESH_BINARY);
    std::vector<std::vector<Point>> contours;
    std::vector<Vec4i> hierarchy;
    findContours(thresh, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    for (const auto&amp; cnt : contours) {
        if (cnt.size()>100) {
            RotatedRect rect = minAreaRect(cnt);
            Point2f box[4];
            rect.points(box);
            angle=rect.angle;
        }
    }
}

The logic is that I need to get the raw image data of each frame, namely the rgba_mat in my custom function. In every frame loop, the frame image is handled by the edgedetection function, which accesses every detected object. From there I get the ROI image of each object and apply edge detection with Ellipse_feature_extraction, so that in the end I can get the angle of each detected object.
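
(Note that, as written, the local float angle declared inside the object loop shadows the angle output parameter of edgedetection, so the computed value never reaches the probe. A minimal sketch of writing it back, where obj_angle is just a new local name used for illustration, would be:)

    float obj_angle = 0.0f;
    Ellipse_feature_extraction (roi, obj_angle);
    *angle = obj_angle;  // report the angle of the last processed object back to the caller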

Then I run sudo make install in the terminal, and everything completes with no errors. But when I try
deepstream-app -c deepstream_app_config.txt
to start the pipeline, it seems to crash: only one frame appears and then the pipeline ends:

I must express my sincere thanks if you have read the complete text. I just wonder why the pipeline ends so suddenly; there must be something I am not thinking about.

There are two ways to debug your problem.
1. Use the gdb tool to debug the crash issue. This will locate where the crash occurred in your code.

$gdb --args <your_command>
$r
(after the crash occurs)
$bt

2. You can try to dump the image after each step of your procedure to check whether the image is correct.

Do you mean that I should do this in the terminal:
$gdb --args deepstream-app -c deepstream_app_config.txt
or
$r --args deepstream-app -c deepstream_app_config.txt
Which of these two options is it?

Then after the crash, should I do:
$bt --args deepstream-app -c deepstream_app_config.txt

I tried the commands you offered:

(gdb) r
Starting program: /usr/bin/deepstream-apptest 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
** ERROR: <main:657>: Specify config file with -c option
Quitting
App run failed
[Inferior 1 (process 8843) exited with code 0377]
(gdb) bt
No stack.
(gdb)

I don’t know if my procedure is right.
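
(The "Specify config file with -c option" error above simply means gdb launched the program without its arguments; they have to be passed via --args when starting gdb. Assuming the same config file as before, the session would look roughly like:)

$gdb --args deepstream-app -c deepstream_app_config.txt
(gdb) r
... wait for the crash ...
(gdb) bt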

It seems that the problem is in the edgedetection function, so I commented out some lines to debug the code snippet:

//custom function
void edgedetection(guint height,guint width,NvBufSurface *surface,NvDsFrameMeta *frame_meta,float * angle){
  Mat nv12_mat = Mat(height*3/2, width, CV_8UC1, surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
  surface->surfaceList[frame_meta->batch_id].pitch);
  //Convert nv12 to RGBA to apply algo based on RGBA
  Mat rgba_mat;
  cvtColor(nv12_mat, rgba_mat, COLOR_YUV2BGRA_NV12);
  for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
  l_obj = l_obj->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
    Rect bbox(obj->rect_params.left, obj->rect_params.top,
                obj->rect_params.width, obj->rect_params.height);
    //Mat roi = rgba_mat(bbox);
    //float angle;
    //Ellipse_feature_extraction(roi,angle);  
    }
}

Then I found that if I delete this line:
cvtColor(nv12_mat, rgba_mat, COLOR_YUV2BGRA_NV12);
the pipeline works fine.

But all the code for getting the raw images comes from suggestions offered on this forum, so I don't exactly know how to change it.

Could you dump the images from rgba_mat and nv12_mat and check whether they are correct?
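
(A minimal way to do that, assuming the same OpenCV setup as in the snippets above, is to write both Mats to disk right after the conversion; frame_meta->frame_num is used here only to make the file names unique:)

    imwrite ("frame_" + std::to_string (frame_meta->frame_num) + "_nv12.png", nv12_mat);
    imwrite ("frame_" + std::to_string (frame_meta->frame_num) + "_rgba.png", rgba_mat);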

Do you mean that I should use nv12_mat directly, like this?

//custom function
void edgedetection(guint height,guint width,NvBufSurface *surface,NvDsFrameMeta *frame_meta,float * angle){
  Mat nv12_mat = Mat(height*3/2, width, CV_8UC1, surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
  surface->surfaceList[frame_meta->batch_id].pitch);
  //Convert nv12 to RGBA to apply algo based on RGBA
  //Mat rgba_mat;
  //cvtColor(nv12_mat, rgba_mat, COLOR_YUV2BGRA_NV12);
  for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
  l_obj = l_obj->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
    Rect bbox(obj->rect_params.left, obj->rect_params.top,
                obj->rect_params.width, obj->rect_params.height);
    Mat roi = nv12_mat(bbox);
    float angle;
    Ellipse_feature_extraction(roi,angle);  
    }
}

I tried this:

//custom function
void edgedetection(guint height,guint width,NvBufSurface *surface,NvDsFrameMeta *frame_meta,float * angle){
  Mat nv12_mat = Mat(height*3/2, width, CV_8UC1, surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
  surface->surfaceList[frame_meta->batch_id].pitch);
  for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
  l_obj = l_obj->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
    Rect bbox(obj->rect_params.left, obj->rect_params.top,
                obj->rect_params.width, obj->rect_params.height);
    Mat imgray;
    cvtColor(nv12_mat, imgray, COLOR_YUV2GRAY_NV12);  // full-frame conversion, repeated for every object
    Mat roi = imgray(bbox);
    Mat edges;
    Canny(imgray, edges, 500, 100);  // note: Canny runs on the whole gray frame here, not on roi

    Mat thresh;
    threshold(edges, thresh, 127, 255, THRESH_BINARY);
    std::vector<std::vector<Point>> contours;
    std::vector<Vec4i> hierarchy;
    findContours(thresh, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    for (const auto&amp; cnt : contours) {
        if (cnt.size()>100) {
            RotatedRect rect = minAreaRect(cnt);
            Point2f box[4];
            rect.points(box);
            angle=rect.angle;
        }
     }
    }
}

The good news is that it works, but the bad news is that the PERF drops from 60 to about 12. Is there anything I can do to make up for this?

No. If you want to use OpenCV for processing, there will be memory copies between the GPU and the CPU, and the OpenCV algorithms themselves also run on the CPU. These will inevitably lead to lower performance.
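
(That said, one incidental saving in the last snippet: the full-frame NV12 to gray conversion currently runs once per object. A sketch that hoists it out of the object loop, keeping the same variable names, avoids the repeated conversion and restricts the edge detection to each ROI, though the GPU-to-CPU copy and the CPU processing cost remain:)

  Mat imgray;
  cvtColor(nv12_mat, imgray, COLOR_YUV2GRAY_NV12);  // convert the full frame only once
  for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
  l_obj = l_obj->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
    Rect bbox(obj->rect_params.left, obj->rect_params.top,
                obj->rect_params.width, obj->rect_params.height);
    Mat roi = imgray(bbox);
    Mat edges;
    Canny(roi, edges, 500, 100);  // edge detection on the object ROI only
    // ... findContours / minAreaRect on 'edges' as before ...
  }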

There is no update from you for a period, so we assume this is not an issue anymore. Hence we are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.