nvds_obj_enc_process gives "Error: Object dimensions are greater than frame dimensions. Object not encoded."

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.0
• TensorRT Version
8.0.1.6
• NVIDIA GPU Driver Version (valid for GPU only)
470.86
• Issue Type( questions, new requirements, bugs)
Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

What I'd really like to understand is why the “solution” here works: How to get surface index when batch size bigger 1

In my case, if I use a single stream (mux and infer batch-size each of 1), everything works great. However, if I add another stream and change each of those values to 2, I get the error in the title.

But, if I do what the person in the link above did and up the muxer’s batch-size to 4, the issue goes away.

Why is this?

(Also, I’m using deepstream-app (the reference app) to do this. I make use of nvds_obj_enc_process as part of the pipeline.)

Sorry for the late reply.
Will try to reproduce your issue and get back to you.

Thank you. I’d be interested to know what you find out, as I can confirm I still have this issue with 6.0.1.

I’m using two streams, and in every plugin where I can specify a batch-size (the streammux and the primary-gie), I’ve set it to two. I have a tracker enabled with batch processing on.

I’m happy to provide any other configuration settings you might want to see.

Don’t use DS 6.0 or 6.0.1; they have a lot of bugs, and you won’t get help when you’re really stuck on issues like this.

Can you share how to reproduce it? Share a DS sample or your code with us so we can check it on our side.

  • I’m using the “deepstream-app” reference application. The configuration uses PGIE, a NvDCF tracker, two live RTSP streams, and all batch sizes are set to 2.
  • I’ve modified the DSExample plugin (the “optimized” version), but have only changed the code in the “gst_dsexample_output_loop” function, and it is configured to process “full frame”. I believe this is most of the relevant code, as I’ve trimmed out unrelated parts:
static gpointer
gst_dsexample_output_loop (gpointer data)
{
  GstDsExample *dsexample = GST_DSEXAMPLE (data);
  DsExampleOutput *output;
  NvDsObjectMeta *obj_meta = NULL;
  gdouble scale_ratio = 1.0;

  nvtxEventAttributes_t eventAttrib = {0};
  eventAttrib.version = NVTX_VERSION;
  eventAttrib.size = NVTX_EVENT_ATTRIB_STRUCT_SIZE;
  eventAttrib.colorType = NVTX_COLOR_ARGB;
  eventAttrib.color = 0xFFFF0000;
  eventAttrib.messageType = NVTX_MESSAGE_TYPE_ASCII;
  std::string nvtx_str;

  NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context ();
  if (!obj_ctx_handle) {
      g_print ("Unable to create context\n");
      return nullptr;
  }

  nvtx_str =
      "gst-dsexample_output-loop_uid=" + std::to_string (dsexample->unique_id);

  g_mutex_lock (&dsexample->process_lock);

  /* Run till signalled to stop. */
  while (!dsexample->stop) {
    std::unique_ptr < GstDsExampleBatch > batch = nullptr;

    /* Wait if processing queue is empty. */
    if (g_queue_is_empty (dsexample->process_queue)) {
      g_cond_wait (&dsexample->process_cond, &dsexample->process_lock);
      continue;
    }

    /* Pop a batch from the element's process queue. */
    batch.reset ((GstDsExampleBatch *)
        g_queue_pop_head (dsexample->process_queue));
    g_cond_broadcast (&dsexample->process_cond);

    /* Event marker used for synchronization. No need to process further. */
    if (batch->event_marker) {
      continue;
    }

    g_mutex_unlock (&dsexample->process_lock);

    /* Need to only push buffer to downstream element. This batch was not
     * actually submitted for inferencing. */
    if (batch->push_buffer) {
      nvtxDomainRangeEnd(dsexample->nvtx_domain, batch->nvtx_complete_buf_range);

      nvds_set_output_system_timestamp (batch->inbuf,
          GST_ELEMENT_NAME (dsexample));

      GstFlowReturn flow_ret =
          gst_pad_push (GST_BASE_TRANSFORM_SRC_PAD (dsexample),
          batch->inbuf);
      if (dsexample->last_flow_ret != flow_ret) {
        switch (flow_ret) {
            /* Signal the application for pad push errors by posting a error message
             * on the pipeline bus. */
          case GST_FLOW_ERROR:
          case GST_FLOW_NOT_LINKED:
          case GST_FLOW_NOT_NEGOTIATED:
            GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
                ("Internal data stream error."),
                ("streaming stopped, reason %s (%d)",
                    gst_flow_get_name (flow_ret), flow_ret));
            break;
          default:
            break;
        }
      }
      dsexample->last_flow_ret = flow_ret;
      g_mutex_lock (&dsexample->process_lock);
      continue;
    }

    nvtx_str = "dequeueOutputAndAttachMeta batch_num=" + std::to_string(batch->inbuf_batch_num);
    eventAttrib.message.ascii = nvtx_str.c_str();
    nvtxDomainRangePushEx(dsexample->nvtx_domain, &eventAttrib);

    /* For each frame attach metadata output. */
    for (guint i = 0; i < batch->frames.size (); i++) {
      if (dsexample->process_full_frame) {
        NvDsFrameMeta *frame_meta = batch->frames[i].frame_meta;

        GList *elem = g_list_first(frame_meta->obj_meta_list);
        int frame_obj_count = 0;

        while (elem) {
          NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) elem->data;

          std::string disp_text;
          disp_text = obj_meta->text_params.display_text;

          g_mutex_lock (&dsexample->process_lock);

          // condition unimportant here
          if (TRUE) {
            NvDsObjEncUsrArgs userData = { 0 };
            /* To be set by user */
            userData.saveImg = FALSE;
            userData.attachUsrMeta = TRUE;
            /* Set if image scaling is required */
            userData.scaleImg = FALSE;
            userData.scaledWidth = 0;
            userData.scaledHeight = 0;
            /* Preset */
            userData.objNum = frame_obj_count;
            /* Quality */
            userData.quality = 80;
            /* Main function call */
            //nvds_obj_enc_process (ctx, &userData, ip_surf, obj_meta, frame_meta);
            nvds_obj_enc_process (obj_ctx_handle, &userData, dsexample->inter_buf, obj_meta, frame_meta);
          }

          g_mutex_unlock (&dsexample->process_lock);

          frame_obj_count++;
          elem = g_list_next (elem);
        }

        nvds_obj_enc_finish(obj_ctx_handle);

      } else {
        GstDsExampleFrame & frame = batch->frames[i];

        obj_meta = frame.obj_meta;

        /* Should not process on objects smaller than MIN_INPUT_OBJECT_WIDTH x MIN_INPUT_OBJECT_HEIGHT
         * since it will cause hardware scaling issues. */
        if (obj_meta->rect_params.width < MIN_INPUT_OBJECT_WIDTH ||
            obj_meta->rect_params.height < MIN_INPUT_OBJECT_HEIGHT)
          continue;

        // Process the object crop to obtain label
#ifdef WITH_OPENCV
        output = DsExampleProcess (dsexample->dsexamplelib_ctx,
            batch->cvmat[i].data);
#else
        output = DsExampleProcess (dsexample->dsexamplelib_ctx,
            (unsigned char *)batch->inter_buf->surfaceList[i].mappedAddr.addr[0]);
#endif

        // Attach labels for the object
        attach_metadata_object (dsexample, obj_meta, output);

        free (output);
      }
    }

    g_mutex_lock (&dsexample->process_lock);

#ifdef WITH_OPENCV
    g_queue_push_tail (dsexample->buf_queue, batch->cvmat);
#else
    g_queue_push_tail (dsexample->buf_queue, batch->inter_buf);
#endif
    g_cond_broadcast (&dsexample->buf_cond);

    nvtxDomainRangePop (dsexample->nvtx_domain);
  }
  g_mutex_unlock (&dsexample->process_lock);

  return nullptr;
}

Sorry for the late reply.
I used the “solution” you mentioned in the link to reproduce the issue.
Error: Object dimensions are greater than frame dimensions. Object not encoded.
This error is caused by the user passing a specific surfaceList index:
ip_surf->surfaceList[frame_meta->batch_id]
The user just needs to pass ip_surf, without the index. At the low level, nvds_obj_enc_process will create the encode process based on the surface index itself.

Ok, but what about with my code above? The link I put in my original post showed that the user’s “fix” involved changing the batch size to double his original value. In my case, you can see above that I’m not using the surfaceList array at all. Why would I require my batch sizes to be double what they should be? I have two sources running, with my batch sizes set everywhere to 2 (as I described in my original post). Why would I need to set some or all of them to 4 for them to work? I have verified that increasing to 4 works, but I shouldn’t need to do that.

I believe I finally figured out what was causing this.

I was using a BufSurface that only had one surface in it, and after much digging and reading and hair-pulling, it looks as though a frame indexes into the surfaceList based on the frame’s “batch_id”, and not the “surface_index”. After switching to the proper buffer that had the necessary surfaces, I believe the issue above has gone away.

Glad to know you figured it out.
