How to spawn/create a new NvDsObjectMeta object?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) :- dGPU
• DeepStream Version :- 7

Hello, I have a gst-plugin that runs inference with the Darknet model directly, without converting it to a TensorRT engine. I have been able to integrate it into a DeepStream pipeline.

I am able to get the objects detected by the gst-darknet plugin in a probe function of the DeepStream pipeline, and I want to convert them into NvDsObjectMeta objects.
How can I do that?
Can I manually spawn/create new objects and fill the frame's obj_meta_list?
Or is there some other way?

Can I create a new object and add it with nvds_add_obj_meta_to_frame()?

My end goal is for the tracker to be able to track the objects detected by the gst-darknet plugin.

If nvstreammux is not used, NvDsBatchMeta (NVDS_BATCH_GST_META) cannot be created, and NvDsFrameMeta/NvDsObjectMeta cannot be added to the frame_meta_list/obj_meta_list linked lists.

Without batch meta, it is almost impossible to use any of the features provided by DeepStream.

NvStreammux is used.

nvstreammux is usually only used together with nvinfer; how did you use nvstreammux with gst-darknet?
Can you share the pipeline?

You can refer to the following sample for creating and adding NvDsObjectMeta:

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/deepstream_infer_tensor_meta_test.cpp

NvDsObjectMeta *obj_meta =
              nvds_acquire_obj_meta_from_pool (batch_meta);
......
nvds_add_obj_meta_to_frame (frame_meta, obj_meta, NULL);

The pipeline looks like: elements_before_streammux -> streammux -> nvvideoconvert -> caps_filter (RGB) -> gst-darknet -> queue -> sink
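As a rough sketch, that pipeline might be expressed as a single gst-launch-1.0 line like the one below. Note that `darknetinfer` is only a stand-in for the actual gst-darknet element name, and the `uridecodebin` stage is a stand-in for `elements_before_streammux`; the mux dimensions are placeholders as well:

```shell
# Hypothetical gst-launch-1.0 rendering of the described pipeline.
# "darknetinfer" stands in for the actual gst-darknet element name;
# uridecodebin stands in for "elements_before_streammux".
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/video.mp4 ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvvideoconvert ! 'video/x-raw,format=RGB' ! \
  darknetinfer ! queue ! nveglglessink
```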

This does not seem to be the best solution. If you want to use yolov3.weights, there is a relevant sample in DS-6.2.

We also provide ONNX models and sample code for YOLOv3/YOLOv4.

We don't want to use a TRT engine because of the accuracy loss in converting darknet -> onnx -> TRT, and this is for already-deployed legacy hardware.

static GstPadProbeReturn tiler_src_pad_buffer_probe(GstPad *pad,
                                                    GstPadProbeInfo *info,
                                                    gpointer u_data) {
    GstBuffer *buf = (GstBuffer *)info->data;
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    NvDsFrameMeta *frame_meta = nvds_acquire_frame_meta_from_pool(batch_meta);
    if (!frame_meta) {
        // log and exit
        return GST_PAD_PROBE_OK;
    }
    static const GstMetaInfo *metainfo = NULL;
    if (metainfo == NULL) {
        metainfo = gst_meta_get_info("GstDarknetMetaDetections");
    }
    GstDarknetMetaDetections *meta =
        (GstDarknetMetaDetections *)gst_buffer_get_meta(buf, metainfo->api);
    if (!meta) {
        // no darknet meta on this buffer
        return GST_PAD_PROBE_OK;
    }

    // detection count is available in the detection_count variable
    g_print("detections: %u\n", meta->detection_count);
    
    // detections are available in the detections variable
    for (guint i = 0; i < meta->detection_count; i++) {
        GstDarknetMetaDetection *det = &meta->detections[i];
        g_print("* prob=%.2f%% box=[%u %u %u %u] class=%d\n", det->probability,
                det->xmin, det->ymin, det->xmax, det->ymax, det->classid);
        
        // try to spawn a new object here
        NvDsObjectMeta *obj_meta = nvds_acquire_obj_meta_from_pool(batch_meta);

        obj_meta->unique_component_id = 0;  // TODO : Figure out a better way to fill this field
        obj_meta->confidence = det->probability;
        obj_meta->object_id =  -1;  // This is an untracked object, setting object_id to -1.
        obj_meta->class_id = det->classid;

        NvOSD_RectParams &rect_params = obj_meta->rect_params;
        NvOSD_TextParams &text_params = obj_meta->text_params;

        /* Assign bounding box coordinates. These can be overwritten if tracker
         * component is present in the pipeline. Note that xmax/ymax are corner
         * coordinates, while NvOSD_RectParams expects width/height extents. */
        rect_params.left = det->xmin;
        rect_params.top = det->ymin;
        rect_params.width = det->xmax - det->xmin;
        rect_params.height = det->ymax - det->ymin;

        /* Border of width 3. */
        rect_params.border_width = 3;
        rect_params.has_bg_color = 0;
        rect_params.border_color = (NvOSD_ColorParams){1, 0, 0, 1};

        /* display_text requires heap-allocated memory; it is freed with
         * g_free() when the object meta is released. */
        text_params.display_text = g_strdup("object");
        /* Display text above the top-left corner of the object. y_offset is
         * unsigned, so clamp instead of letting it wrap below zero. */
        text_params.x_offset = rect_params.left;
        text_params.y_offset = rect_params.top > 10 ? rect_params.top - 10 : 0;
        /* Set black background for the text. */
        text_params.set_bg_clr = 1;
        text_params.text_bg_clr = (NvOSD_ColorParams){0, 0, 0, 1};
        /* Font face, size and color. */
        text_params.font_params.font_name = (gchar *)"Serif";
        text_params.font_params.font_size = 11;
        text_params.font_params.font_color = (NvOSD_ColorParams){1, 1, 1, 1};

        /* Preserve original positional bounding box coordinates of detector in
         * the frame so that those can be accessed after tracker */
        obj_meta->detector_bbox_info.org_bbox_coords.left = rect_params.left;
        obj_meta->detector_bbox_info.org_bbox_coords.top = rect_params.top;
        obj_meta->detector_bbox_info.org_bbox_coords.width = rect_params.width;
        obj_meta->detector_bbox_info.org_bbox_coords.height = rect_params.height;
        
        nvds_add_obj_meta_to_frame(frame_meta, obj_meta, NULL);
    }
    // Meta modifications should ideally be wrapped in
    // nvds_acquire_meta_lock(batch_meta) / nvds_release_meta_lock(batch_meta).

    return GST_PAD_PROBE_OK;
}

You cannot create frame_meta by yourself. It is created by nvstreammux when it forms a batch. You need to get it as follows:

for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
     l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
    /* ... work with frame_meta here ... */
}

Okay.

I updated the code as you suggested.

static GstPadProbeReturn tiler_src_pad_buffer_probe(GstPad *pad,
                                                    GstPadProbeInfo *info,
                                                    gpointer u_data) {
    GstBuffer *buf = (GstBuffer *)info->data;
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    if (!batch_meta) {
        return GST_PAD_PROBE_OK;
    }

    static const GstMetaInfo *metainfo = NULL;
    if (metainfo == NULL) {
        metainfo = gst_meta_get_info("GstDarknetMetaDetections");
    }
    GstDarknetMetaDetections *meta =
        (GstDarknetMetaDetections *)gst_buffer_get_meta(buf, metainfo->api);
    if (!meta) {
        // no darknet meta on this buffer
        return GST_PAD_PROBE_OK;
    }

    // detection count is available in the detection_count variable
    g_print("detections: %u\n", meta->detection_count);
    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
         l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
        // detections are available in the detections variable.
        // NOTE: the darknet meta is attached once per buffer, so with
        // batch-size > 1 every detection would be added to every frame;
        // this assumes a single-source, batch-size-1 pipeline.
        for (guint i = 0; i < meta->detection_count; i++) {
            GstDarknetMetaDetection *det = &meta->detections[i];
            g_print("* prob=%.2f%% box=[%u %u %u %u] class=%d\n",
                    det->probability, det->xmin, det->ymin, det->xmax,
                    det->ymax, det->classid);

            // try to spawn a new object here
            NvDsObjectMeta *obj_meta =
                nvds_acquire_obj_meta_from_pool(batch_meta);

            obj_meta->unique_component_id =
                0;  // TODO : Figure out a better way to fill this field
            obj_meta->confidence = det->probability;
            obj_meta->object_id =
                -1;  // This is an untracked object, setting object_id to -1.
            obj_meta->class_id = det->classid;

            NvOSD_RectParams &rect_params = obj_meta->rect_params;
            NvOSD_TextParams &text_params = obj_meta->text_params;

            /* Assign bounding box coordinates. These can be overwritten if
             * tracker component is present in the pipeline. xmax/ymax are
             * corner coordinates; NvOSD_RectParams expects width/height. */
            rect_params.left = det->xmin;
            rect_params.top = det->ymin;
            rect_params.width = det->xmax - det->xmin;
            rect_params.height = det->ymax - det->ymin;

            /* Border of width 3. */
            rect_params.border_width = 3;
            rect_params.has_bg_color = 0;
            rect_params.border_color = (NvOSD_ColorParams){1, 0, 0, 1};

            /* display_text requires heap-allocated memory; it is freed with
             * g_free() when the object meta is released. */
            text_params.display_text = g_strdup("object");
            /* Display text above the top-left corner of the object. y_offset
             * is unsigned, so clamp instead of letting it wrap below zero. */
            text_params.x_offset = rect_params.left;
            text_params.y_offset =
                rect_params.top > 10 ? rect_params.top - 10 : 0;
            /* Set black background for the text. */
            text_params.set_bg_clr = 1;
            text_params.text_bg_clr = (NvOSD_ColorParams){0, 0, 0, 1};
            /* Font face, size and color. */
            text_params.font_params.font_name = (gchar *)"Serif";
            text_params.font_params.font_size = 11;
            text_params.font_params.font_color =
                (NvOSD_ColorParams){1, 1, 1, 1};

            /* Preserve original positional bounding box coordinates of detector
             * in the frame so that those can be accessed after tracker */
            obj_meta->detector_bbox_info.org_bbox_coords.left =
                rect_params.left;
            obj_meta->detector_bbox_info.org_bbox_coords.top = rect_params.top;
            obj_meta->detector_bbox_info.org_bbox_coords.width =
                rect_params.width;
            obj_meta->detector_bbox_info.org_bbox_coords.height =
                rect_params.height;

            nvds_add_obj_meta_to_frame(frame_meta, obj_meta, NULL);
        }
    }
    // Meta modifications should ideally be wrapped in
    // nvds_acquire_meta_lock(batch_meta) / nvds_release_meta_lock(batch_meta).

    return GST_PAD_PROBE_OK;
}

After this, when I start the pipeline, it goes to the PLAYING state, the nveglglessink window opens, and then it freezes to a black screen.