Multiple probe functions in DeepStream C

Environment:

GPU Type: RTX 3070Ti
Nvidia Driver Version: 535.146.02
CUDA Version: 12.2
Operating System + Version: Ubuntu 22.04

@yuweiw @Amycao @yingliu @pshin @fanzh

I am working on encoding images based on the image-meta-test example.

Currently, I am saving the first detected object with its bounding box in the full frame; the other bounding boxes do not appear.

But because of the conditions I added, the bounding boxes are also suppressed in the inference video on the display.

How can I get the full inference video running on the display while still saving only the first detected object with its bounding box?

Should I add another probe function or is there anything else to be done?
Please help.

This is the modified code and outputs.
Thank You

image-meta-test.txt (21.6 KB)

@yuweiw @Amycao @yingliu @pshin @fanzh

Should I use the tee element for my application?
One branch for the full inference video and another for saving frames with a single bounding box, without disturbing the inference video.

You can filter the bboxes you want and make the others transparent. Please refer to the border_color parameter in the NvOSD_RectParams structure.

static GstPadProbeReturn
osd_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer ctx)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  GstMapInfo inmap = GST_MAP_INFO_INIT;
  if (!gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
    GST_ERROR ("input buffer mapinfo failed");
    return GST_PAD_PROBE_DROP;
  }
  NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
  gst_buffer_unmap (buf, &inmap);

  NvDsObjectMeta *obj_meta = NULL;
  guint vehicle_count = 0;
  guint person_count = 0;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  const gchar *calc_enc_str = g_getenv ("CALCULATE_ENCODE_TIME");
  gboolean calc_enc = !g_strcmp0 (calc_enc_str, "yes");

  for (l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
      NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
      /* BUG: l_obj is declared above but never initialized before this
       * dereference; this is the likely cause of the crash discussed below. */
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      
      NvDsObjEncUsrArgs frameData = { 0 };
      
      /* Preset */
      frameData.isFrame = 1;
      /* To be set by user */
      frameData.saveImg = TRUE;
      sprintf(frameData.fileNameImg,"/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/image_save/pics/%d.jpg",frame_count);
      frameData.attachUsrMeta = TRUE;
      /* Set if Image scaling Required */
      frameData.scaleImg = FALSE;
      frameData.scaledWidth = 0;
      frameData.scaledHeight = 0;
      obj_meta->rect_params.width= frame_meta->source_frame_width;
      obj_meta->rect_params.height= frame_meta->source_frame_height;
      obj_meta->rect_params.top= 0.0f;
      obj_meta->rect_params.left= 0.0f;
      obj_meta->rect_params.border_width = 0;
      /* Quality */
      frameData.quality = 80;
      /* Set to calculate time taken to encode JPG image. */
      if (calc_enc) {
        frameData.calcEncodeTime = 1;
      }
      /* Main Function Call */
     nvds_obj_enc_process (ctx, &frameData, ip_surf, obj_meta, frame_meta);
  }
  nvds_obj_enc_finish (ctx);
  frame_count++;
  return GST_PAD_PROBE_OK;
}

The app won’t run if I try to change obj_meta in this function. @yuweiw

What do you mean by “the app won’t run”? If there are error prints, you can attach the log.

While running the app, the following is taking place. @yuweiw

Judging from your recorded video, it’s a crash issue. You can use the gdb tool to do an initial analysis of the cause:

$ gdb <your app>
$ r <parameters of the command>
After the crash:
$ bt

Hello @yuweiw
Here are the results when I ran the debugger.


It seems like it is crashing in start_thread, judging from your picture.
You can figure out what’s wrong by commenting out, bit by bit, the code you’ve added yourself.

Hello @yuweiw

I want to do the same thing that happens in deepstream_imagedata-multistream.py, where a single detected object is saved as an image using OpenCV in Python.

Can I replicate that in DeepStream C using nvds_obj_enc_process()?

You can refer to our demo code: sources/apps/sample_apps/deepstream-image-meta-test.

Yes, I am working on the same. @yuweiw

It is encoding all the detected objects and frames.

But how do I encode a single object, with its bounding box, in the full frame?

You just need to port the code we provided in your previous topic 279003.

Yes, with the help of that code and by modifying the sink pad probe, I am able to save the single object with its bounding box in the full frame.

But it’s affecting the OSD.

So I am asking: is there any way to save the images without affecting the OSD?

What do you mean it’s affecting the OSD?

nvds_obj_enc encodes the images based on the OSD state, i.e. whatever I change in osd_sink_pad_buffer_probe().

In osd sink pad probe I have put a condition to display only the first vehicle detected.

So the images saved are only the single vehicle detected with full frame with bounding box. (I have attached the image in my first post)

I want the display to detect all the objects normally but the images to be saved should be a single object with a bounding box with a full frame.

That is why I mentioned deepstream_imagedata_multistream.py:
there the display is not touched, but the saved images show a single object with its bounding box in the full frame.

My OSD sink pad probe function is as follows:

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsFrameMeta *frame_meta = NULL;
  NvOSD_TextParams *txt_params = NULL;
  NvOSD_TextParams *txt_params1 = NULL;
  guint vehicle_count = 0;
  guint person_count = 0;
  gboolean is_first_object;
  NvDsMetaList *l_frame, *l_obj;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsDisplayMeta *display_meta = NULL;

  if (!batch_meta) {
    // No batch meta attached.
    return GST_PAD_PROBE_OK;
  }

  for (l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    frame_meta = (NvDsFrameMeta *) l_frame->data;

    if (frame_meta == NULL) {
      // Ignore Null frame meta.
      continue;
    }

    is_first_object = FALSE;

    for (l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      if (obj_meta == NULL) {
        // Ignore Null object.
        continue;
      }
      txt_params = &(obj_meta->text_params);
      if (txt_params->display_text)
        g_free (txt_params->display_text);

      txt_params->display_text = (char *) g_malloc0 (MAX_DISPLAY_LEN);

      if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
        vehicle_count++;

        if (vehicle_count == 1) {
          g_snprintf (txt_params->display_text, MAX_DISPLAY_LEN, "%s ",
              pgie_classes_str[obj_meta->class_id]);
          obj_meta->rect_params.border_color.red = 0;
          obj_meta->rect_params.border_color.green = 1.0;
          obj_meta->rect_params.border_color.blue = 0;
        } else {
          /* Hide every vehicle after the first. */
          obj_meta->rect_params.border_width = 0;
          txt_params->font_params.font_size = 0;
          txt_params->font_params.font_color.red = 0;
          txt_params->font_params.font_color.green = 0;
          txt_params->font_params.font_color.blue = 0;
          txt_params->font_params.font_color.alpha = 0;

          txt_params->set_bg_clr = 0;
          txt_params->text_bg_clr.red = 0.0;
          txt_params->text_bg_clr.green = 0.0;
          txt_params->text_bg_clr.blue = 0.0;
          txt_params->text_bg_clr.alpha = 0;
        }
      }

      if (obj_meta->class_id == PGIE_CLASS_ID_PERSON)
        person_count++;

      /* Note: the settings below run for every object, so they override
       * the border_width and font values cleared in the else branch above. */
      obj_meta->rect_params.border_width = 1;
      txt_params->x_offset = obj_meta->rect_params.left;
      txt_params->y_offset = obj_meta->rect_params.top - 25;

      /* Font, font-color and font-size */
      txt_params->font_params.font_name = (char *) "Arial";
      txt_params->font_params.font_size = 15;
      txt_params->font_params.font_color.red = 1.0;
      txt_params->font_params.font_color.green = 1.0;
      txt_params->font_params.font_color.blue = 1.0;
      txt_params->font_params.font_color.alpha = 1.0;

      /* Text background color */
      txt_params->set_bg_clr = 1;
      txt_params->text_bg_clr.red = 0.0;
      txt_params->text_bg_clr.green = 0.0;
      txt_params->text_bg_clr.blue = 0.0;
      txt_params->text_bg_clr.alpha = 0.5;

      NvDsUserMeta *user_event_meta =
          nvds_acquire_user_meta_from_pool (batch_meta);
      if (user_event_meta) {
        nvds_add_user_meta_to_frame (frame_meta, user_event_meta);
      } else {
        g_print ("Error in attaching event meta to buffer\n");
      }
      is_first_object = TRUE;
    }
  }

  g_print ("Frame Number = %d Vehicle Count = %d Person Count = %d\n",
      frame_number, vehicle_count, person_count);
  frame_number++;
  return GST_PAD_PROBE_OK;
}

I have attached the full code in my first post.

Please help me figure this out. @yuweiw
Thank You

We do not support this feature. If you want to do this, you’ll need to save the image yourself and draw the bbox yourself in the saved image.

How do I get the bounding box info to draw it using OpenCV?

Is there any function to get the bounding box info?

What if you use the rect_params in NvDsDisplayMeta to get the bbox info in your probe function? Then you draw the bbox on the first object and just break out of the loop.

https://docs.nvidia.com/metropolis/deepstream/5.0/dev-guide/DeepStream_Development_Guide/baggage/structNvDsDisplayMeta.html

Ok Thank You
By any chance, can I have two sink pad probe functions and link them with the src pad probe function?