How to display obj.mask_width and obj.mask_height over the bounding box?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) nano
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version 8.6.2.3-1+cuda12.2
• NVIDIA GPU Driver Version (valid for GPU only) NVIDIA-SMI 540.2.0
• Issue Type (questions, new requirements, bugs)

How do I print obj.mask_width and obj.mask_height over the bounding box? Basically, where is the function for printing in NvDsInferParseYoloSeg?

Code :

#include <vector>
#include <iostream>
#include <cstring>

#include "nvdsinfer_custom_impl.h"

#include "utils.h"

#define NMS_THRESH 0.45

extern "C" bool
NvDsInferParseYoloSeg(std::vector<NvDsInferLayerInfo> const& outputLayersInfo, NvDsInferNetworkInfo const& networkInfo,
NvDsInferParseDetectionParams const& detectionParams, std::vector<NvDsInferInstanceMaskInfo>& objectList);

static void
addSegProposal(const float* masks, const uint& maskWidth, const uint& maskHeight, const uint& b,
NvDsInferInstanceMaskInfo& obj)
{
obj.mask = new float[maskHeight * maskWidth];
obj.mask_width = maskWidth;
obj.mask_height = maskHeight;
obj.mask_size = sizeof(float) * maskHeight * maskWidth;

const float* data = masks + b * maskHeight * maskWidth;
memcpy(obj.mask, data, sizeof(float) * maskHeight * maskWidth);
}

static void
addBBoxProposal(const float& bx1, const float& by1, const float& bx2, const float& by2, const uint& netW, const uint& netH,
const int& maxIndex, const float& maxProb, NvDsInferInstanceMaskInfo& obj)
{
float x1 = clamp(bx1, 0, netW);
float y1 = clamp(by1, 0, netH);
float x2 = clamp(bx2, 0, netW);
float y2 = clamp(by2, 0, netH);

obj.left = x1;
obj.width = clamp(x2 - x1, 0, netW);
obj.top = y1;
obj.height = clamp(y2 - y1, 0, netH);

if (obj.width < 1 || obj.height < 1) {
return;
}

obj.detectionConfidence = maxProb;
obj.classId = maxIndex;

// NOTE: obj.mask_width / obj.mask_height are only filled in by addSegProposal(),
// which runs after addBBoxProposal(), so printing them here shows unset values.
// Move this print after the addSegProposal() call instead.
std::cout << "Mask dimensions: width = " << obj.mask_width << ", height = " << obj.mask_height << std::endl;
}

static std::vector<NvDsInferInstanceMaskInfo>
decodeTensorYoloSeg(const float* boxes, const float* scores, const float* classes, const float* masks,
const uint& outputSize, const uint& maskWidth, const uint& maskHeight, const uint& netW, const uint& netH,
const std::vector<float>& preclusterThreshold)
{
std::vector<NvDsInferInstanceMaskInfo> objects;

for (uint b = 0; b < outputSize; ++b) {
float maxProb = scores[b];
int maxIndex = (int) classes[b];

if (maxProb < preclusterThreshold[maxIndex]) {
  continue;
}

float bx1 = boxes[b * 4 + 0];
float by1 = boxes[b * 4 + 1];
float bx2 = boxes[b * 4 + 2];
float by2 = boxes[b * 4 + 3];

NvDsInferInstanceMaskInfo obj;

addBBoxProposal(bx1, by1, bx2, by2, netW, netH, maxIndex, maxProb, obj);
addSegProposal(masks, maskWidth, maskHeight, b, obj);

objects.push_back(obj);

}

return objects;
}

static bool
NvDsInferParseCustomYoloSeg(std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
NvDsInferNetworkInfo const& networkInfo, NvDsInferParseDetectionParams const& detectionParams,
std::vector<NvDsInferInstanceMaskInfo>& objectList)
{
if (outputLayersInfo.empty()) {
std::cerr << "ERROR: Could not find output layer in bbox parsing" << std::endl;
return false;
}

const NvDsInferLayerInfo& boxes = outputLayersInfo[0];
const NvDsInferLayerInfo& scores = outputLayersInfo[1];
const NvDsInferLayerInfo& classes = outputLayersInfo[2];
const NvDsInferLayerInfo& masks = outputLayersInfo[3];

const uint outputSize = boxes.inferDims.d[0];
const uint maskWidth = masks.inferDims.d[2];
const uint maskHeight = masks.inferDims.d[1];

std::vector<NvDsInferInstanceMaskInfo> objects = decodeTensorYoloSeg((const float*) (boxes.buffer),
(const float*) (scores.buffer), (const float*) (classes.buffer), (const float*) (masks.buffer), outputSize, maskWidth,
maskHeight, networkInfo.width, networkInfo.height, detectionParams.perClassPreclusterThreshold);

objectList = objects;

return true;
}

extern "C" bool
NvDsInferParseYoloSeg(std::vector<NvDsInferLayerInfo> const& outputLayersInfo, NvDsInferNetworkInfo const& networkInfo,
NvDsInferParseDetectionParams const& detectionParams, std::vector<NvDsInferInstanceMaskInfo>& objectList)
{
return NvDsInferParseCustomYoloSeg(outputLayersInfo, networkInfo, detectionParams, objectList);
}

CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE(NvDsInferParseYoloSeg);


You can add that in a probe function like osd_sink_pad_buffer_probe in sources\apps\sample_apps\deepstream-test1\deepstream_test1_app.c. Based on the coordinates of the bbox, you can draw the width and height onto the bbox. You can refer to that source code to see how to draw the text.

So do we have to make changes in deepstream_test1_app.c, or do we have to move code from deepstream_test1_app.c into nvdsparseseg_Yolo.cpp? Since all of the code runs inside DeepStream 7.0, I have to make changes to files in DeepStream 7.0, right?

If you want to draw anything on the image, you need to add it to an NvDsDisplayMeta structure. So you have to add that in the code of your app.

  1. add a probe function on the src_pad of nvdsosd plugin
  2. get the coordinates of the bbox
  3. draw the text on the image based on the coordinates of the bbox
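The string built in step 3 can be sketched in plain C. The struct and names below (demo_obj, format_mask_label) are hypothetical stand-ins invented for this example so it is self-contained; in a real probe the values come from NvDsObjectMeta and the result goes into NvOSD_TextParams.display_text:

```c
#include <stdio.h>

/* Hypothetical stand-in for the bbox/mask fields read in the probe.
 * In the real probe these come from obj_meta->rect_params and the
 * instance-mask metadata. */
struct demo_obj {
    float left, top;                 /* bbox top-left corner */
    unsigned mask_width, mask_height;
};

/* Format the label that would be handed to the OSD text parameters. */
static int format_mask_label(char *buf, size_t len, const struct demo_obj *o)
{
    return snprintf(buf, len, "Mask %ux%u at (%.0f, %.0f)",
                    o->mask_width, o->mask_height, o->left, o->top);
}
```

The resulting buffer is what you would assign (via g_strdup or g_malloc0 + snprintf) to display_text, with x_offset/y_offset set to the bbox corner so the label lands on the box.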

Hi, can you please clarify your message? Where is the NvDsDisplayMeta structure defined? Is that file part of DeepStream 7.0? Do I have to make changes there?

DeepStream 7.0 contains: bin, doc, install.h, lib, sample, service maker, sources

How do I add a probe function on the src_pad of the nvdsosd plugin? Where do I add it?

Original file : nvdsparseseg_Yolo.cpp

Location : DeepStream-Yolo-Seg/nvdsinfer_custom_impl_Yolo_seg/nvdsparseseg_Yolo.cpp at master · marcoslucianops/DeepStream-Yolo-Seg · GitHub

Let me take our open source deepstream_test1_app.c as an example.
The probe function: osd_sink_pad_buffer_probe
NvDsDisplayMeta : NvDsDisplayMeta *display_meta = NULL;

We recommend that you first take a brief look at DeepStream's samples and run one or two of our demos.

It depends on which demo you are using. If you are using deepstream-test1, you have to make changes in the deepstream_test1_app.c. If you are using other demos, you have to make changes in the corresponding demo.

Based on the above code, I adjusted my code accordingly, but it gives errors:

nvdsparseseg_Yolo.cpp:123:1: warning: ‘GstPadProbeReturn osd_sink_pad_buffer_probe(GstPad*, GstPadProbeInfo*, gpointer)’ defined but not used [-Wunused-function]
123 | osd_sink_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
| ^~~~~~~~~~~~~~~~~~~~~~~~~
g++ -L/usr/local/cuda-11.4/lib64 -L/opt/nvidia/deepstream/deepstream-6.3/lib -lgstbase-1.0 -lgstvideo-1.0 -lgstapp-1.0 -lgstpbutils-1.0 -lglib-2.0 -lnvds_infer -lnvds_meta -lnvds_utils -L/usr/local/cuda-11.4/lib64 -lcudart -lcublas -lstdc++fs -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0 -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0 nvdsparseseg_Yolo.o -o nvdsparseseg_Yolo
/usr/bin/ld: /usr/lib/gcc/aarch64-linux-gnu/9/../../../aarch64-linux-gnu/Scrt1.o: in function `_start':
(.text+0x18): undefined reference to `main'
/usr/bin/ld: (.text+0x1c): undefined reference to `main'
/usr/bin/ld: nvdsparseseg_Yolo.o: in function `addBBoxProposal(float const&, float const&, float const&, float const&, unsigned int const&, unsigned int const&, int const&, float const&, NvDsInferInstanceMaskInfo&)':
nvdsparseseg_Yolo.cpp:(.text+0x148): undefined reference to `clamp(float, float, float)'
/usr/bin/ld: nvdsparseseg_Yolo.cpp:(.text+0x170): undefined reference to `clamp(float, float, float)'
/usr/bin/ld: nvdsparseseg_Yolo.cpp:(.text+0x198): undefined reference to `clamp(float, float, float)'
/usr/bin/ld: nvdsparseseg_Yolo.cpp:(.text+0x1c0): undefined reference to `clamp(float, float, float)'
/usr/bin/ld: nvdsparseseg_Yolo.cpp:(.text+0x1f8): undefined reference to `clamp(float, float, float)'
/usr/bin/ld: nvdsparseseg_Yolo.o:nvdsparseseg_Yolo.cpp:(.text+0x234): more undefined references to `clamp(float, float, float)' follow
/usr/bin/ld: nvdsparseseg_Yolo.o: in function `osd_sink_pad_buffer_probe(_GstPad*, _GstPadProbeInfo*, void*)':
nvdsparseseg_Yolo.cpp:(.text+0x73c): undefined reference to `gst_buffer_get_nvds_batch_meta'
/usr/bin/ld: nvdsparseseg_Yolo.cpp:(.text+0x910): undefined reference to `nvds_acquire_display_meta_from_pool'
/usr/bin/ld: nvdsparseseg_Yolo.cpp:(.text+0x970): undefined reference to `g_malloc0'
/usr/bin/ld: nvdsparseseg_Yolo.cpp:(.text+0xa18): undefined reference to `g_strdup'
/usr/bin/ld: nvdsparseseg_Yolo.cpp:(.text+0xa94): undefined reference to `nvds_add_display_meta_to_frame'
collect2: error: ld returned 1 exit status
make: *** [: nvdsparseseg_Yolo] Error 1
make: Leaving directory '/home/paymentinapp/DeepStream-Yolo-Seg-master/nvdsinfer_custom_impl_Yolo_seg'
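The `undefined reference to main` errors above happen because the g++ command shown in the log links nvdsparseseg_Yolo.o into an executable (`-o nvdsparseseg_Yolo`) instead of a shared library, and the `clamp` errors happen because utils.o (which defines `clamp` in the DeepStream-Yolo layout) is not on the link line. A minimal corrected link rule might look like the sketch below; the exact object list depends on your source tree:

```make
# Sketch: link ALL object files (including utils.o, which provides clamp)
# into a shared library -- never into a standalone executable.
libnvdsinfer_custom_impl_Yolo_seg.so: nvdsparseseg_Yolo.o utils.o
	$(CC) -shared -o $@ $^ $(LDFLAGS) $(LIBS)
```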

Makefile:

CUDA_VER?=
ifeq ($(CUDA_VER),)
$(error "CUDA_VER is not set")
endif

CC:= g++
NVCC:=/usr/local/cuda-$(CUDA_VER)/bin/nvcc

CFLAGS:= -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations
CFLAGS+= -I/opt/nvidia/deepstream/deepstream-6.3/sources/includes -I/usr/local/cuda-$(CUDA_VER)/include

CUFLAGS:= -I/opt/nvidia/deepstream/deepstream-6.3/sources/includes -I/usr/local/cuda-$(CUDA_VER)/include

# Linker flags
LIBS:= -lgstbase-1.0 -lgstvideo-1.0 -lgstapp-1.0 -lgstpbutils-1.0 -lglib-2.0 -lnvds_infer -lnvds_meta -lnvds_utils
LIBS+= -L/opt/nvidia/deepstream/deepstream-6.3/lib -L/usr/local/cuda-$(CUDA_VER)/lib64 -lcudart -lcublas -lstdc++fs

# Package configuration
PKGS := gstreamer-1.0
CFLAGS += $(shell pkg-config --cflags $(PKGS))
LIBS += $(shell pkg-config --libs $(PKGS))

LFLAGS:= -shared -Wl,--start-group $(LIBS) -Wl,--end-group

INCS:= $(wildcard *.h)
SRCFILES:= $(wildcard *.cpp)
SRCFILES+= $(wildcard *.cu)

TARGET_LIB:= libnvdsinfer_custom_impl_Yolo_seg.so
TARGET_OBJS:= $(SRCFILES:.cpp=.o)
TARGET_OBJS:= $(TARGET_OBJS:.cu=.o)

all: $(TARGET_LIB)

%.o: %.cpp $(INCS) Makefile
	$(CC) -c -o $@ $(CFLAGS) $<

%.o: %.cu $(INCS) Makefile
	$(NVCC) -c -o $@ --compiler-options '-fPIC' $(CUFLAGS) $<

$(TARGET_LIB): $(TARGET_OBJS)
	$(CC) -shared -o $@ $(TARGET_OBJS) $(LFLAGS)

clean:
	rm -rf $(TARGET_LIB) $(TARGET_OBJS)

#include <iostream>
#include <cstring>
#include <string>
#include <vector>
#include <gst/gst.h>
#include "nvdsinfer_custom_impl.h"
#include "utils.h"
#include <glib.h>
#include <stdio.h>
#include "gstnvdsmeta.h"
#include "nvds_yml_parser.h"

#define MAX_DISPLAY_LEN 256

#define NMS_THRESH 0.45

extern "C" bool
NvDsInferParseYoloSeg(std::vector<NvDsInferLayerInfo> const& outputLayersInfo, NvDsInferNetworkInfo const& networkInfo,
NvDsInferParseDetectionParams const& detectionParams, std::vector<NvDsInferInstanceMaskInfo>& objectList);

static void
addSegProposal(const float* masks, const uint& maskWidth, const uint& maskHeight, const uint& b,
NvDsInferInstanceMaskInfo& obj)
{
obj.mask = new float[maskHeight * maskWidth];
obj.mask_width = maskWidth;
obj.mask_height = maskHeight;
obj.mask_size = sizeof(float) * maskHeight * maskWidth;

const float* data = masks + b * maskHeight * maskWidth;
memcpy(obj.mask, data, sizeof(float) * maskHeight * maskWidth);

}

static void
addBBoxProposal(const float& bx1, const float& by1, const float& bx2, const float& by2, const uint& netW, const uint& netH,
const int& maxIndex, const float& maxProb, NvDsInferInstanceMaskInfo& obj)
{
float x1 = clamp(bx1, 0, netW);
float y1 = clamp(by1, 0, netH);
float x2 = clamp(bx2, 0, netW);
float y2 = clamp(by2, 0, netH);

obj.left = x1;
obj.width = clamp(x2 - x1, 0, netW);
obj.top = y1;
obj.height = clamp(y2 - y1, 0, netH);

if (obj.width < 1 || obj.height < 1) {
    return;
}

obj.detectionConfidence = maxProb;
obj.classId = maxIndex;

}

static std::vector<NvDsInferInstanceMaskInfo>
decodeTensorYoloSeg(const float* boxes, const float* scores, const float* classes, const float* masks,
const uint& outputSize, const uint& maskWidth, const uint& maskHeight, const uint& netW, const uint& netH,
const std::vector<float>& preclusterThreshold)
{
std::vector<NvDsInferInstanceMaskInfo> objects;

for (uint b = 0; b < outputSize; ++b) {
    float maxProb = scores[b];
    int maxIndex = static_cast<int>(classes[b]);

    if (maxProb < preclusterThreshold[maxIndex]) {
        continue;
    }

    float bx1 = boxes[b * 4 + 0];
    float by1 = boxes[b * 4 + 1];
    float bx2 = boxes[b * 4 + 2];
    float by2 = boxes[b * 4 + 3];

    NvDsInferInstanceMaskInfo obj;

    addBBoxProposal(bx1, by1, bx2, by2, netW, netH, maxIndex, maxProb, obj);
    addSegProposal(masks, maskWidth, maskHeight, b, obj);

    objects.push_back(obj);
}

return objects;

}

static bool
NvDsInferParseCustomYoloSeg(std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
NvDsInferNetworkInfo const& networkInfo, NvDsInferParseDetectionParams const& detectionParams,
std::vector<NvDsInferInstanceMaskInfo>& objectList)
{
if (outputLayersInfo.empty()) {
std::cerr << "ERROR: Could not find output layer in bbox parsing" << std::endl;
return false;
}

const NvDsInferLayerInfo& boxes = outputLayersInfo[0];
const NvDsInferLayerInfo& scores = outputLayersInfo[1];
const NvDsInferLayerInfo& classes = outputLayersInfo[2];
const NvDsInferLayerInfo& masks = outputLayersInfo[3];

const uint outputSize = boxes.inferDims.d[0];
const uint maskWidth = masks.inferDims.d[2];
const uint maskHeight = masks.inferDims.d[1];

std::vector<NvDsInferInstanceMaskInfo> objects = decodeTensorYoloSeg((const float*) (boxes.buffer),
    (const float*) (scores.buffer), (const float*) (classes.buffer), (const float*) (masks.buffer), outputSize, maskWidth,
    maskHeight, networkInfo.width, networkInfo.height, detectionParams.perClassPreclusterThreshold);

objectList = objects;

return true;

}

extern "C" bool
NvDsInferParseYoloSeg(std::vector<NvDsInferLayerInfo> const& outputLayersInfo, NvDsInferNetworkInfo const& networkInfo,
NvDsInferParseDetectionParams const& detectionParams, std::vector<NvDsInferInstanceMaskInfo>& objectList)
{
return NvDsInferParseCustomYoloSeg(outputLayersInfo, networkInfo, detectionParams, objectList);
}

CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE(NvDsInferParseYoloSeg);

// Callback function for the OSD sink pad buffer probe
static GstPadProbeReturn
osd_sink_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
// Check if the probe is for a buffer
if (GST_PAD_PROBE_INFO_TYPE(info) & GST_PAD_PROBE_TYPE_BUFFER) {
GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER(info);
if (!buf) {
return GST_PAD_PROBE_OK;
}

    // Extract metadata from the buffer
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    if (!batch_meta) {
        return GST_PAD_PROBE_OK;
    }

    // Iterate through each frame's metadata
    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;

        // Iterate through each object in the frame
        for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
            NvDsObjectMeta *obj_meta = (NvDsObjectMeta *)l_obj->data;

            // Check if `rect_params` has valid values
            if (obj_meta->rect_params.left >= 0 && obj_meta->rect_params.top >= 0) {
                std::string mask_info_text = "Mask - Width: " + std::to_string(obj_meta->rect_params.width) +
                                             ", Height: " + std::to_string(obj_meta->rect_params.height) +
                                             ", X: " + std::to_string(obj_meta->rect_params.left) +
                                             ", Y: " + std::to_string(obj_meta->rect_params.top);

                // Allocate display meta
                NvDsDisplayMeta *display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
                if (!display_meta) {
                    std::cerr << "ERROR: Unable to acquire display meta from pool" << std::endl;
                    continue;
                }

                // Initialize and set text parameters
                NvOSD_TextParams *txt_params = &display_meta->text_params[0];
                display_meta->num_labels = 1;
                txt_params->display_text = (char *)g_malloc0(MAX_DISPLAY_LEN);
                if (!txt_params->display_text) {
                    std::cerr << "ERROR: Unable to allocate memory for display text" << std::endl;
                    continue;
                }

                snprintf(txt_params->display_text, MAX_DISPLAY_LEN, "%s", mask_info_text.c_str());

                // Set the offsets where the string should appear
                txt_params->x_offset = obj_meta->rect_params.left;
                txt_params->y_offset = obj_meta->rect_params.top;

                // Font, font-color, and font-size
                txt_params->font_params.font_name = g_strdup("Serif");
                txt_params->font_params.font_size = 12;
                txt_params->font_params.font_color.red = 0.0;
                txt_params->font_params.font_color.green = 1.0;
                txt_params->font_params.font_color.blue = 0.0;
                txt_params->font_params.font_color.alpha = 1.0;

                // Text background color
                txt_params->set_bg_clr = 1;
                txt_params->text_bg_clr.red = 0.0;
                txt_params->text_bg_clr.green = 0.0;
                txt_params->text_bg_clr.blue = 0.0;
                txt_params->text_bg_clr.alpha = 1.0;

                // Add display meta to frame
                nvds_add_display_meta_to_frame(frame_meta, display_meta);
            }
        }
    }
}

return GST_PAD_PROBE_OK;

}

You cannot modify the postprocess code for this, since DeepStream-Yolo-Seg is executed with our deepstream-app demo. You need to modify that demo code instead: sources\apps\sample_apps\deepstream-app. This demo is open source and you can debug it directly.

The idea is not clear yet. How about changing the code in nvdsparseseg_Yolo.cpp, generating the .so file with `CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo_seg`, and executing? That must work, right? If we modify the demo code in sources\apps\sample_apps\deepstream-app, which file do we have to change: deepstream_app_main.c, deepstream_app_config_parser.c, deepstream_app.c, or deepstream_app_config_parser_yaml.cpp?

Even after changing overlay_graphics in /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-app/deepstream_app_main.c, it does not display the width and height of the segmentation mask over the bounding box:

static gboolean
overlay_graphics (AppCtx * appCtx, GstBuffer * buf,
NvDsBatchMeta * batch_meta, guint index)
{
int srcIndex = appCtx->active_source_index;
if (srcIndex == -1)
return TRUE;

NvDsFrameLatencyInfo *latency_info = NULL;
NvDsDisplayMeta *display_meta = nvds_acquire_display_meta_from_pool(batch_meta);

// Set up display meta for source info
display_meta->num_labels = 1;
display_meta->text_params[0].display_text = g_strdup_printf("Source: %s",
    appCtx->config.multi_source_config[srcIndex].uri);

display_meta->text_params[0].y_offset = 20;
display_meta->text_params[0].x_offset = 20;
display_meta->text_params[0].font_params.font_color = (NvOSD_ColorParams) {0, 1, 0, 1};
display_meta->text_params[0].font_params.font_size = appCtx->config.osd_config.text_size * 1.5;
display_meta->text_params[0].font_params.font_name = "Serif";
display_meta->text_params[0].set_bg_clr = 1;
display_meta->text_params[0].text_bg_clr = (NvOSD_ColorParams) {0, 0, 0, 1.0};

if (nvds_enable_latency_measurement) {
    g_mutex_lock(&appCtx->latency_lock);
    latency_info = &appCtx->latency_info[index];
    display_meta->num_labels++;
    display_meta->text_params[1].display_text = g_strdup_printf("Latency: %lf",
        latency_info->latency);
    g_mutex_unlock(&appCtx->latency_lock);

    display_meta->text_params[1].y_offset = (display_meta->text_params[0].y_offset + display_meta->text_params[0].font_params.font_size + 5);
    display_meta->text_params[1].x_offset = 20;
    display_meta->text_params[1].font_params.font_color = (NvOSD_ColorParams) {0, 1, 0, 1};
    display_meta->text_params[1].font_params.font_size = appCtx->config.osd_config.text_size * 1.5;
    display_meta->text_params[1].font_params.font_name = "Arial";
    display_meta->text_params[1].set_bg_clr = 1;
    display_meta->text_params[1].text_bg_clr = (NvOSD_ColorParams) {0, 0, 0, 1.0};
}

// Add bounding box information
for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
        NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;

        if (obj->unique_component_id == (gint) appCtx->config.primary_gie_config.unique_id) {
            // Add bounding box details to the display meta
            // NOTE: a single NvDsDisplayMeta holds at most
            // MAX_ELEMENTS_IN_DISPLAY_META (16) text_params entries; with many
            // objects num_labels overflows this array, which can prevent
            // anything from being drawn.
            display_meta->num_labels++;
            gchar *bbox_text = g_strdup_printf("BBox: x=%.2f, y=%.2f, w=%.2f, h=%.2f",
                obj->rect_params.left, obj->rect_params.top,
                obj->rect_params.width, obj->rect_params.height);

            display_meta->text_params[display_meta->num_labels - 1].display_text = bbox_text;
            display_meta->text_params[display_meta->num_labels - 1].x_offset = 20;
            display_meta->text_params[display_meta->num_labels - 1].y_offset = (display_meta->text_params[0].y_offset + (display_meta->text_params[0].font_params.font_size + 5) * (display_meta->num_labels));
            display_meta->text_params[display_meta->num_labels - 1].font_params.font_color = (NvOSD_ColorParams) {1, 1, 1, 1}; // White color
            display_meta->text_params[display_meta->num_labels - 1].font_params.font_size = appCtx->config.osd_config.text_size * 1.5;
            display_meta->text_params[display_meta->num_labels - 1].font_params.font_name = "Arial";
            display_meta->text_params[display_meta->num_labels - 1].set_bg_clr = 1;
            display_meta->text_params[display_meta->num_labels - 1].text_bg_clr = (NvOSD_ColorParams) {0, 0, 0, 1.0};
        }
    }
}

// Add the display meta to the frame
// NOTE: the same display_meta instance is attached to every frame here;
// normally each frame should get its own display_meta acquired from the pool.
for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = l_frame->data;
    nvds_add_display_meta_to_frame(frame_meta, display_meta);
}

return TRUE;

}

I made the changes in deepstream_app.c, but it does not display the width and height of the segmentation mask over the bounding box:

static void
process_meta (AppCtx * appCtx, NvDsBatchMeta * batch_meta)
{
// For single source always display text either with demuxer or with tiler
if (!appCtx->config.tiled_display_config.enable ||
appCtx->config.num_source_sub_bins == 1) {
appCtx->show_bbox_text = 1;
}

for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
     l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
         l_obj = l_obj->next) {
        NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
        gint class_index = obj->class_id;
        NvDsGieConfig *gie_config = NULL;
        gchar *str_ins_pos = NULL;

        if (obj->unique_component_id ==
            (gint) appCtx->config.primary_gie_config.unique_id) {
            gie_config = &appCtx->config.primary_gie_config;
        } else {
            for (gint i = 0; i < (gint) appCtx->config.num_secondary_gie_sub_bins;
                 i++) {
                gie_config = &appCtx->config.secondary_gie_sub_bin_config[i];
                if (obj->unique_component_id == (gint) gie_config->unique_id) {
                    break;
                }
                gie_config = NULL;
            }
        }
        g_free (obj->text_params.display_text);
        obj->text_params.display_text = NULL;

        if (gie_config != NULL) {
            if (g_hash_table_contains(gie_config->bbox_border_color_table,
                    class_index + (gchar *) NULL)) {
                obj->rect_params.border_color = *((NvOSD_ColorParams *)
                    g_hash_table_lookup(gie_config->bbox_border_color_table,
                        class_index + (gchar *) NULL));
            } else {
                obj->rect_params.border_color = gie_config->bbox_border_color;
            }
            obj->rect_params.border_width = appCtx->config.osd_config.border_width;

            if (g_hash_table_contains(gie_config->bbox_bg_color_table,
                    class_index + (gchar *) NULL)) {
                obj->rect_params.has_bg_color = 1;
                obj->rect_params.bg_color = *((NvOSD_ColorParams *)
                    g_hash_table_lookup(gie_config->bbox_bg_color_table,
                        class_index + (gchar *) NULL));
            } else {
                obj->rect_params.has_bg_color = 0;
            }
        }

        if (!appCtx->show_bbox_text)
            continue;

        obj->text_params.x_offset = obj->rect_params.left;
        obj->text_params.y_offset = obj->rect_params.top - 30;
        obj->text_params.font_params.font_color =
            appCtx->config.osd_config.text_color;
        obj->text_params.font_params.font_size =
            appCtx->config.osd_config.text_size;
        obj->text_params.font_params.font_name = appCtx->config.osd_config.font;
        if (appCtx->config.osd_config.text_has_bg) {
            obj->text_params.set_bg_clr = 1;
            obj->text_params.text_bg_clr = appCtx->config.osd_config.text_bg_color;
        }

        // Allocate more memory for display text to accommodate additional info
        obj->text_params.display_text = (char *) g_malloc(256);
        obj->text_params.display_text[0] = '\0';
        str_ins_pos = obj->text_params.display_text;

        // Display object label
        if (obj->obj_label[0] != '\0')
            sprintf(str_ins_pos, "%s", obj->obj_label);
        str_ins_pos += strlen(str_ins_pos);

        // Display object ID if available
        if (obj->object_id != UNTRACKED_OBJECT_ID) {
            // Object ID is a 64-bit sequential value; trimming to lower 32-bits
            if (appCtx->config.tracker_config.display_tracking_id) {
                guint64 const LOW_32_MASK = 0x00000000FFFFFFFF;
                sprintf(str_ins_pos, " %lu", (obj->object_id & LOW_32_MASK));
                str_ins_pos += strlen(str_ins_pos);
            }
        }

        // Display bounding box details: x, y, width, height
        sprintf(str_ins_pos, " (x=%.2f, y=%.2f, w=%.2f, h=%.2f)",
            obj->rect_params.left,
            obj->rect_params.top,
            obj->rect_params.width,
            obj->rect_params.height);
        str_ins_pos += strlen(str_ins_pos);

        // Add classifier labels
        obj->classifier_meta_list =
            g_list_sort(obj->classifier_meta_list, component_id_compare_func);
        for (NvDsMetaList * l_class = obj->classifier_meta_list; l_class != NULL;
             l_class = l_class->next) {
            NvDsClassifierMeta *cmeta = (NvDsClassifierMeta *) l_class->data;
            for (NvDsMetaList * l_label = cmeta->label_info_list; l_label != NULL;
                 l_label = l_label->next) {
                NvDsLabelInfo *label = (NvDsLabelInfo *) l_label->data;
                if (label->pResult_label) {
                    sprintf(str_ins_pos, " %s", label->pResult_label);
                } else if (label->result_label[0] != '\0') {
                    sprintf(str_ins_pos, " %s", label->result_label);
                }
                str_ins_pos += strlen(str_ins_pos);
            }
        }
    }
}

}


You can try to just call that once. Please refer to our demos like deepstream_emotion_app.cpp.

Can you improve your suggestion so that it can be easily understood by everyone? It seems like we are just moving code from here to there. Please explain your idea elaborately and clearly.

Each of the demos I've attached shows how to use display_meta to draw something on an image.
I will explain the detailed process again. For the details of the code, please refer to the demos I attached.
1. Get the NvDsBatchMeta from the GstBuffer.

example:
NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

2. Get the NvDsDisplayMeta from the batch_meta.

example:
display_meta = nvds_acquire_display_meta_from_pool(batch_meta);

3. Define what you want to draw.

Text:
NvOSD_TextParams *txt_params  = &display_meta->text_params[0];
...define the parameters of NvOSD_TextParams...

Rectangle:
NvOSD_RectParams *rect_params =  &display_meta->rect_params[0];
...define the parameters of NvOSD_RectParams...

Drawing line, arrow, circle is basically the same process as above

4. Add the display_meta to the frame_meta.

nvds_add_display_meta_to_frame(frame_meta, display_meta);
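Put together, step 3 is the only part with real logic; the sketch below mirrors it with hypothetical stand-in structs (the demo_* names are invented here so the snippet compiles standalone; the real types are NvDsDisplayMeta and NvOSD_TextParams from the DeepStream headers, and steps 1, 2 and 4 are the gst_buffer_get_nvds_batch_meta, nvds_acquire_display_meta_from_pool and nvds_add_display_meta_to_frame calls shown above):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-ins for the DeepStream metadata types, reduced to
 * the fields that step 3 touches. */
struct demo_text_params {
    char display_text[256];
    int x_offset, y_offset;
};
struct demo_display_meta {
    int num_labels;
    struct demo_text_params text_params[16]; /* fixed capacity, as in DeepStream */
};

/* Step 3: define what to draw -- here, the mask size as a text label
 * anchored at the top-left corner of the bounding box. */
static void fill_mask_label(struct demo_display_meta *dm,
                            unsigned mask_w, unsigned mask_h,
                            int bbox_left, int bbox_top)
{
    struct demo_text_params *txt = &dm->text_params[dm->num_labels++];
    snprintf(txt->display_text, sizeof txt->display_text,
             "mask %ux%u", mask_w, mask_h);
    txt->x_offset = bbox_left;
    txt->y_offset = bbox_top;
}
```

In the real probe you would also set the font and background-color fields of NvOSD_TextParams, as the osd_sink_pad_buffer_probe code earlier in this thread does.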

In DeepStream-Yolo-Seg there is no .c file. Which code should I execute then? You said it is executed with your deepstream-app demo, so I need to modify the demo code in sources\apps\sample_apps\deepstream-app. Then the question is: which file should I copy, and which file should I change? Even considering the two files nvdsparseseg_Yolo.cpp and deepstream_app_main.c, and reflecting the changes you mention, the errors continue. Please read my comments carefully and answer carefully; the question seems unaddressed. Thank you for your patience.

You modified the code in the right position before; maybe the way it was used was wrong.
It's C/C++ code. Have you run the make and make install commands after modifying the code?
Please follow the detailed steps I attached before to modify the code.