Error: Object dimensions are out of frame boundary. Object not encoded

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4.1
• TensorRT Version: 7.1.3
• Issue Type( questions, new requirements, bugs): Bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I adapted the code from deepstream-image-meta-test to my pipeline to save detected objects as images, but I keep getting this error:

Error: Object dimensions are out of frame boundary. Object not encoded.

My pipeline is as follows:

[camera source source-bin 0] \
                              \
                               --> streammuxer --> PGIE --> Tracker --> queue --> tiler --> osd --> videosink
                              /
[camera source source-bin 1] /

I attached pgie_src_pad_buffer_probe() to the "src" pad of pgie and osd_sink_pad_buffer_probe() to the "sink" pad of nvosd, just like in the sample app.

  1. Any idea why I got the error above?

  2. Also, what other options do I have to save detected objects as images?

  3. What is the difference between using the probe functions in deepstream-image-meta-test vs. using the dsexample plugin to save detected objects as images?

  1. No idea. You need to make sure that your object bbox is in the frame’s area.
  2. All the code and details are in the deepstream-image-meta-test sample; a sketch of the image-saving part follows this list.
  3. The probe approach splits encoding and saving the image across two threads; dsexample does everything in a single thread.
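For reference, here is a minimal sketch of the saving part, modeled on osd_sink_pad_buffer_probe in that sample. The NvDsObjEncOutParams struct and the NVDS_CROP_IMAGE_META meta type come from nvds_obj_encode.h; the file naming below is only illustrative.

#include <stdio.h>
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvds_obj_encode.h"

static GstPadProbeReturn
osd_sink_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *)info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next)
    {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *)l_obj->data;
      /* nvds_obj_enc_process attached the encoded JPEG as user meta of
       * type NVDS_CROP_IMAGE_META when attachUsrMeta was TRUE. */
      for (NvDsMetaList *l_usr = obj_meta->obj_user_meta_list; l_usr != NULL;
           l_usr = l_usr->next)
      {
        NvDsUserMeta *user_meta = (NvDsUserMeta *)l_usr->data;
        if (user_meta->base_meta.meta_type != NVDS_CROP_IMAGE_META)
          continue;
        NvDsObjEncOutParams *enc_jpeg =
            (NvDsObjEncOutParams *)user_meta->user_meta_data;
        /* Illustrative file name: frame number + object id */
        gchar fname[128];
        g_snprintf(fname, sizeof(fname), "frame%d_obj%" G_GUINT64_FORMAT ".jpg",
                   frame_meta->frame_num, obj_meta->object_id);
        FILE *fp = fopen(fname, "wb");
        if (fp)
        {
          fwrite(enc_jpeg->outBuffer, 1, enc_jpeg->outLen, fp);
          fclose(fp);
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}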

@Fiona.Chen,

I have a follow-up question regarding the first question. Here is the code I used to do the encoding, taken from the deepstream-image-meta-test sample:

#define save_img FALSE        /* let the encoder save the image to disk itself? */
#define attach_user_meta TRUE /* attach the encoded JPEG as object user meta */

static GstPadProbeReturn
pgie_src_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer ctx)
{
  GstBuffer *buf = (GstBuffer *)info->data;
  GstMapInfo inmap = GST_MAP_INFO_INIT;
  if (!gst_buffer_map(buf, &inmap, GST_MAP_READ))
  {
    GST_ERROR("input buffer mapinfo failed");
    /* Mapping failed; GST_FLOW_ERROR is not a valid GstPadProbeReturn,
     * so let the buffer pass through unprocessed instead */
    return GST_PAD_PROBE_OK;
  }
  NvBufSurface *ip_surf = (NvBufSurface *)inmap.data;
  gst_buffer_unmap(buf, &inmap);

  NvDsObjectMeta *obj_meta = NULL;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    guint num_rects = 0;
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next)
    {
      obj_meta = (NvDsObjectMeta *)(l_obj->data);
      num_rects++;
      /* Conditions that the user needs to set to encode the detected objects
       * of interest. For demonstration, only the first object in each frame
       * is encoded. */
      if (num_rects == 1)
      {
        NvDsObjEncUsrArgs userData = {0};
        /* To be set by user */
        userData.saveImg = save_img;
        userData.attachUsrMeta = attach_user_meta;
        /* Preset */
        userData.objNum = num_rects;
        /* Main function call */
        nvds_obj_enc_process(ctx, &userData, ip_surf, obj_meta, frame_meta);
      }
    }
  }
  /* Wait for all queued object encodings in this batch to finish */
  nvds_obj_enc_finish(ctx);
  return GST_PAD_PROBE_OK;
}

[main]
 /* Let's add a probe to get informed of the generated metadata. We add the
   * probe to the src pad of the pgie element, since by that time the buffer
   * will have all the nvinfer metadata. */
  pgie_src_pad = gst_element_get_static_pad(pgie, "src");
  /* Create context for object encoding */
  NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context();
  if (!obj_ctx_handle)
  {
    g_print("Unable to create context\n");
    return EXIT_FAILURE;
  }
  if (!pgie_src_pad)
  {
    g_print("Unable to get src pad\n");
  }
  else
  {
    gst_pad_add_probe(pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, pgie_src_pad_buffer_probe, (gpointer)obj_ctx_handle, NULL);
  }
  gst_object_unref(pgie_src_pad);
[main]

You need to make sure that your object bbox is in the frame’s area.

After narrowing it down, the error appears whenever nvds_obj_enc_process is called. I can only find the header file for this function, not its implementation, so I don’t know what nvds_obj_enc_process does internally or how to prevent it from throwing the error. In the code from the deepstream-image-meta-test sample, I didn’t find any code that makes sure the “object bbox is in the frame’s area”. Would you mind elaborating on how to keep the object bbox within the frame’s area?

nvds_obj_enc_process is documented at https://docs.nvidia.com/metropolis/deepstream/sdk-api/object_encoder.html

You know your nvstreammux width and height, right? You can check the bbox values in the PGIE output; they are in the object meta (https://docs.nvidia.com/metropolis/deepstream/sdk-api/Meta/_NvDsObjectMeta.html). Check whether the coordinates exceed the frame’s width and height.
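For example, a minimal sketch of such a check before calling nvds_obj_enc_process; MUXER_OUTPUT_WIDTH/HEIGHT here are placeholders for whatever width/height you configured on nvstreammux:

#define MUXER_OUTPUT_WIDTH 1920   /* placeholder: your nvstreammux width */
#define MUXER_OUTPUT_HEIGHT 1080  /* placeholder: your nvstreammux height */

/* Returns TRUE if the detected bbox lies entirely inside the muxed frame */
static gboolean
bbox_inside_frame(NvDsObjectMeta *obj_meta)
{
  NvOSD_RectParams *r = &obj_meta->rect_params;
  return r->left >= 0 && r->top >= 0 &&
         r->left + r->width <= MUXER_OUTPUT_WIDTH &&
         r->top + r->height <= MUXER_OUTPUT_HEIGHT;
}

Calling nvds_obj_enc_process only when this returns TRUE (or clamping rect_params to the frame first) should avoid the error.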

Hi @Fiona.Chen,

Thank you for your reply.

I was asking about the implementation of nvds_obj_enc_process, not the interface; the link is for the interface, which I can already find in the header file.

Here is what fixed the issue for me: I attached the pgie_src_pad_buffer_probe probe to the tracker’s “src” pad instead of the pgie’s “src” pad. I can’t explain why, though. The change amounts to this (same probe function and encoding context as above; "tracker" is my nvtracker element):
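  /* Only change relative to the sample: probe the tracker's "src" pad
   * (post-tracking bboxes) instead of the pgie's */
  GstPad *tracker_src_pad = gst_element_get_static_pad(tracker, "src");
  if (!tracker_src_pad)
  {
    g_print("Unable to get tracker src pad\n");
  }
  else
  {
    gst_pad_add_probe(tracker_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                      pgie_src_pad_buffer_probe, (gpointer)obj_ctx_handle, NULL);
    gst_object_unref(tracker_src_pad);
  }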

How do I set the value for userData.objNum? Can I just let nvds_obj_enc_process set that value? What exactly does objNum refer to?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

The implementation of nvds_obj_enc_process is not open source, so only the API description and sample code showing its usage are available.

Can you reproduce the problem with deepstream-image-meta-test?