I draw lines in dsexample — how do I save the frame with those lines?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.4
• NVIDIA GPU Driver Version (valid for GPU only): 510
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Dear professor:

I use dsexample in deepstream-app. In dsexample, I draw lines with the function "attach_metadata_full_frame". So I have two questions:

(1) How do I save the frame with the lines? I would like to save it as a JPG (see the sketch after this post).
(2) I have saved the cropped frame region without the bounding box, so I would like to save the bbox as well.

I searched this forum and found that we can change the dsexample plugin's location in the pipeline. Is there any other way?
Thank you very much.
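A rough sketch of what saving the JPG in (1) could look like, assuming the stock ENABLE_OPENCV build of gstdsexample.cpp, where get_converted_mat() leaves a BGR copy of the scaled frame in dsexample->cvmat. The counter and file-name pattern below are placeholders, not part of the plugin:

/* Sketch only. At the top of gstdsexample.cpp: */
#include <opencv2/imgcodecs.hpp>   /* cv::imwrite */

/* In gst_dsexample_transform_ip(), in the full-frame branch, right after
 * get_converted_mat() has returned GST_FLOW_OK: */
static guint dump_count = 0;       /* illustrative counter, not in the stock plugin */
gchar filename[64];
g_snprintf (filename, sizeof (filename), "frame_%06u.jpg", dump_count++);
/* Anything already drawn into the Mat (lines, boxes) ends up in the JPG. */
cv::imwrite (filename, *dsexample->cvmat);

Note that this dumps the scaled processing-resolution copy that dsexample itself works on; lines that are only attached as display metadata are rendered later by nvdsosd, which is why moving the save point downstream of the OSD also comes up later in this thread.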

I have an idea, but I have run into a problem.

I wrote another plugin that is the same as dsexample. Actually, I copied dsexample and changed "dsexample" to "newplugin" in all the files. I succeeded in building "newplugin", and I can get results from this plugin.
Now, if I enable dsexample or newplugin separately, there is no problem. But if I enable the two plugins at the same time, it crashes: free(): invalid pointer, aborted (core dumped). It looks like the two collide.

In the configuration file, the unique-id values are different.

What is the problem? Is my way of saving the lines right?
Thank you very much

I found that if I change the position of the two plugins in the pipeline in deepstream-app.c, it works. The new plugin is named "fenceplugin", and it must be ahead of dsexample.

The code of the "create_pipeline" function is below:

gboolean
create_pipeline (AppCtx * appCtx,
bbox_generated_callback bbox_generated_post_analytics_cb,
bbox_generated_callback all_bbox_generated_cb, perf_callback perf_cb,
overlay_graphics_callback overlay_graphics_cb)
{
gboolean ret = FALSE;
NvDsPipeline *pipeline = &appCtx->pipeline;
NvDsConfig *config = &appCtx->config;
GstBus *bus;
GstElement *last_elem;
GstElement *tmp_elem1;
GstElement *tmp_elem2;
guint i;
GstPad *fps_pad = NULL;
gulong latency_probe_id;

_dsmeta_quark = g_quark_from_static_string (NVDS_META_STRING);

appCtx->all_bbox_generated_cb = all_bbox_generated_cb;
appCtx->bbox_generated_post_analytics_cb = bbox_generated_post_analytics_cb;
appCtx->overlay_graphics_cb = overlay_graphics_cb;

if (config->osd_config.num_out_buffers < 8) {
config->osd_config.num_out_buffers = 8;
}

pipeline->pipeline = gst_pipeline_new ("pipeline");
if (!pipeline->pipeline) {
NVGSTDS_ERR_MSG_V ("Failed to create pipeline");
goto done;
}

bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline->pipeline));
pipeline->bus_id = gst_bus_add_watch (bus, bus_callback, appCtx);
gst_object_unref (bus);

if (config->file_loop) {
/* Let each source bin know it needs to loop. */
guint i;
for (i = 0; i < config->num_source_sub_bins; i++)
config->multi_source_config[i].loop = TRUE;
}

for (guint i = 0; i < config->num_sink_sub_bins; i++) {
  NvDsSinkSubBinConfig *sink_config = &config->sink_bin_sub_bin_config[i];
  switch (sink_config->type) {
    case NV_DS_SINK_FAKE:
    case NV_DS_SINK_RENDER_EGL:
    case NV_DS_SINK_RENDER_3D:
    case NV_DS_SINK_RENDER_DRM:
      /* Set the "qos" property of sink, if not explicitly specified in the
       * config. */
      if (!sink_config->render_config.qos_value_specified) {
        sink_config->render_config.qos = FALSE;
      }
    default:
      break;
  }
}
/*
 * Add muxer and < N > source components to the pipeline based
 * on the settings in configuration file.
 */
if (!create_multi_source_bin (config->num_source_sub_bins,
    config->multi_source_config, &pipeline->multi_src_bin))
  goto done;
gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->multi_src_bin.bin);

if (config->streammux_config.is_parsed){
if(!set_streammux_properties (&config->streammux_config,
pipeline->multi_src_bin.streammux)){
NVGSTDS_WARN_MSG_V ("Failed to set streammux properties");
}
}

if(appCtx->latency_info == NULL)
{
appCtx->latency_info = (NvDsFrameLatencyInfo *)
calloc(1, config->streammux_config.batch_size *
sizeof(NvDsFrameLatencyInfo));
}

/** a tee after the tiler which shall be connected to sink(s) */
pipeline->tiler_tee = gst_element_factory_make (NVDS_ELEM_TEE, "tiler_tee");
if (!pipeline->tiler_tee) {
NVGSTDS_ERR_MSG_V ("Failed to create element 'tiler_tee'");
goto done;
}
gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->tiler_tee);

/** Tiler + Demux in Parallel Use-Case */
if (config->tiled_display_config.enable == NV_DS_TILED_DISPLAY_ENABLE_WITH_PARALLEL_DEMUX)
{
pipeline->demuxer =
    gst_element_factory_make (NVDS_ELEM_STREAM_DEMUX, "demuxer");
if (!pipeline->demuxer) {
NVGSTDS_ERR_MSG_V ("Failed to create element 'demuxer'");
goto done;
}
gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->demuxer);

/** NOTE:
 * demux output is supported for only one source
 * If multiple [sink] groups are configured with
 * link_to_demux=1, only the first [sink]
 * shall be constructed for all occurrences of
 * [sink] groups with link_to_demux=1
 */
{
  gchar pad_name[16];
  GstPad *demux_src_pad;

  i = 0;
  if (!create_demux_pipeline (appCtx, i)) {
    goto done;
  }

  for (i=0; i < config->num_sink_sub_bins; i++)
  {
    if (config->sink_bin_sub_bin_config[i].link_to_demux == TRUE)
    {
      g_snprintf (pad_name, 16, "src_%02d", config->sink_bin_sub_bin_config[i].source_id);
      break;
    }
  }

  if (i >= config->num_sink_sub_bins)
  {
    g_print ("\n\nError : sink for demux (use link-to-demux-only property) is not provided in the config file\n\n");
    goto done;
  }

  i = 0;

  gst_bin_add (GST_BIN (pipeline->pipeline),
      pipeline->demux_instance_bins[i].bin);

  demux_src_pad = gst_element_get_request_pad (pipeline->demuxer, pad_name);
  NVGSTDS_LINK_ELEMENT_FULL (pipeline->demuxer, pad_name,
      pipeline->demux_instance_bins[i].bin, "sink");
  gst_object_unref (demux_src_pad);

  NVGSTDS_ELEM_ADD_PROBE(latency_probe_id,
      appCtx->pipeline.demux_instance_bins[i].demux_sink_bin.bin,
      "sink",
      demux_latency_measurement_buf_prob, GST_PAD_PROBE_TYPE_BUFFER,
      appCtx);
  latency_probe_id = latency_probe_id;
}

last_elem = pipeline->demuxer;
link_element_to_tee_src_pad (pipeline->tiler_tee, last_elem);
last_elem = pipeline->tiler_tee;

}

if (config->tiled_display_config.enable) {

/* Tiler will generate a single composited buffer for all sources. So need
 * to create only one processing instance. */
if (!create_processing_instance (appCtx, 0)) {
  goto done;
}
// create and add tiling component to pipeline.
if (config->tiled_display_config.columns *
    config->tiled_display_config.rows < config->num_source_sub_bins) {
  if (config->tiled_display_config.columns == 0) {
    config->tiled_display_config.columns =
        (guint) (sqrt (config->num_source_sub_bins) + 0.5);
  }
  config->tiled_display_config.rows =
      (guint) ceil (1.0 * config->num_source_sub_bins /
      config->tiled_display_config.columns);
  NVGSTDS_WARN_MSG_V
      ("Num of Tiles less than number of sources, readjusting to "
      "%u rows, %u columns", config->tiled_display_config.rows,
      config->tiled_display_config.columns);
}

gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->instance_bins[0].bin);
last_elem = pipeline->instance_bins[0].bin;

if (!create_tiled_display_bin (&config->tiled_display_config,
        &pipeline->tiled_display_bin)) {
  goto done;
}
gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->tiled_display_bin.bin);
NVGSTDS_LINK_ELEMENT (pipeline->tiled_display_bin.bin, last_elem);
last_elem = pipeline->tiled_display_bin.bin;

link_element_to_tee_src_pad (pipeline->tiler_tee, pipeline->tiled_display_bin.bin);
last_elem = pipeline->tiler_tee;

NVGSTDS_ELEM_ADD_PROBE (latency_probe_id,
  pipeline->instance_bins->sink_bin.sub_bins[0].sink, "sink",
  latency_measurement_buf_prob, GST_PAD_PROBE_TYPE_BUFFER,
  appCtx);
latency_probe_id = latency_probe_id;

}
else
{
/*
* Create demuxer only if tiled display is disabled.
*/
pipeline->demuxer =
gst_element_factory_make (NVDS_ELEM_STREAM_DEMUX, "demuxer");
if (!pipeline->demuxer) {
NVGSTDS_ERR_MSG_V ("Failed to create element 'demuxer'");
goto done;
}
gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->demuxer);

for (i = 0; i < config->num_source_sub_bins; i++)
{
  gchar pad_name[16];
  GstPad *demux_src_pad;

  /* Check if any sink has been configured to render/encode output for
   * source index `i`. The processing instance for that source will be
   * created only if at least one sink has been configured as such.
   */
  if (!is_sink_available_for_source_id(config, i))
    continue;

  if (!create_processing_instance(appCtx, i))
  {
    goto done;
  }
  gst_bin_add(GST_BIN(pipeline->pipeline),
              pipeline->instance_bins[i].bin);

  g_snprintf(pad_name, 16, "src_%02d", i);
  demux_src_pad = gst_element_get_request_pad(pipeline->demuxer, pad_name);
  NVGSTDS_LINK_ELEMENT_FULL(pipeline->demuxer, pad_name,
                            pipeline->instance_bins[i].bin, "sink");
  gst_object_unref(demux_src_pad);

  for (int k = 0; k < MAX_SINK_BINS;k++) {
    if(pipeline->instance_bins[i].sink_bin.sub_bins[k].sink){
      NVGSTDS_ELEM_ADD_PROBE(latency_probe_id,
          pipeline->instance_bins[i].sink_bin.sub_bins[k].sink, "sink",
          latency_measurement_buf_prob, GST_PAD_PROBE_TYPE_BUFFER,
          appCtx);
      break;
    }
  }
  latency_probe_id = latency_probe_id;
}
last_elem = pipeline->demuxer;

}

if (config->tiled_display_config.enable == NV_DS_TILED_DISPLAY_DISABLE) {
fps_pad = gst_element_get_static_pad (pipeline->demuxer, "sink");
}
else {
fps_pad = gst_element_get_static_pad (pipeline->tiled_display_bin.bin, "sink");
}

pipeline->common_elements.appCtx = appCtx;
// Decide where in the pipeline the element should be added and add only if
// enabled

//------------------ I added here -------//
if (config->fenceplugin_config.enable)
{
// Create fenceplugin element bin and set properties
if (!create_fenceplugin_bin (&config->fenceplugin_config,
&pipeline->fenceplugin_bin)) {
goto done;
}
// Add fenceplugin bin to instance bin
gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->fenceplugin_bin.bin);

// Link this bin to the last element in the bin
NVGSTDS_LINK_ELEMENT (pipeline->fenceplugin_bin.bin, last_elem);

// Set this bin as the last element
last_elem = pipeline->fenceplugin_bin.bin;

}
//---------------------------------------------//

if (config->dsexample_config.enable) {
// Create dsexample element bin and set properties
if (!create_dsexample_bin (&config->dsexample_config,
&pipeline->dsexample_bin))
{
goto done;
}
// Add dsexample bin to instance bin
gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->dsexample_bin.bin);
// Link this bin to the last element in the bin
NVGSTDS_LINK_ELEMENT (pipeline->dsexample_bin.bin, last_elem);
// Set this bin as the last element
last_elem = pipeline->dsexample_bin.bin;
}

// create and add common components to pipeline.
if (!create_common_elements (config, pipeline, &tmp_elem1, &tmp_elem2,
bbox_generated_post_analytics_cb)) {
goto done;
}

if(!add_and_link_broker_sink(appCtx)) {
goto done;
}

if (tmp_elem2) {
NVGSTDS_LINK_ELEMENT (tmp_elem2, last_elem);
last_elem = tmp_elem1;
}

NVGSTDS_LINK_ELEMENT (pipeline->multi_src_bin.bin, last_elem);

// enable performance measurement and add call back function to receive
// performance data.
if (config->enable_perf_measurement) {
appCtx->perf_struct.context = appCtx;
enable_perf_measurement (&appCtx->perf_struct, fps_pad,
pipeline->multi_src_bin.num_bins,
config->perf_measurement_interval_sec,
config->multi_source_config[0].dewarper_config.num_surfaces_per_frame,
perf_cb);
}

latency_probe_id = latency_probe_id;

if (config->num_message_consumers) {
for (i = 0; i < config->num_message_consumers; i++) {
appCtx->c2d_ctx[i] = start_cloud_to_device_messaging (
&config->message_consumer_config[i], NULL,
&appCtx->pipeline.multi_src_bin);
if (appCtx->c2d_ctx[i] == NULL) {
NVGSTDS_ERR_MSG_V ("Failed to create message consumer");
goto done;
}
}
}

GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS (GST_BIN (appCtx->pipeline.pipeline),
GST_DEBUG_GRAPH_SHOW_ALL, "ds-app-null");

g_mutex_init (&appCtx->app_lock);
g_cond_init (&appCtx->app_cond);
g_mutex_init (&appCtx->latency_lock);

ret = TRUE;
done:
if (fps_pad)
gst_object_unref (fps_pad);

if (!ret) {
NVGSTDS_ERR_MSG_V ("%s failed", func);
}
return ret;
}

Could you help me with the following:
(1) Why does it break when the location is changed?
(2) Although I have saved the frame with the two plugins, the lines cannot be saved. Please help me figure out how to move the dsexample plugin behind "nvosd".

Thank you very much

Hi @yangyi, what does your pipeline look like, or which demo did you use to customize dsexample? Could you give me the full environment to reproduce the issue?
Could you test it with the CLI first? It is simpler to change the plugin location there (see the sketch below). Thanks
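For illustration only (the file name, config name and sink below are placeholders, not taken from this setup): on the command line, placing dsexample after nvdsosd is just a matter of where it appears in the launch string, for example:

gst-launch-1.0 filesrc location=sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! \
  dsexample full-frame=1 processing-width=640 processing-height=384 ! \
  nvvideoconvert ! nveglglessink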

Thank you for your response.
1: I use "deepstream-app.c"; the path is "deepstream6.1/sources/apps/sample_apps/deepstream-app/deepstream.c".

My configuration file is:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[ds-example]
enable=0
processing-width=640
processing-height=384
full-frame=1
unique-id=20
gpu-id=0
blur-objects=0
nvbuf-memory-type=3

[fence-plugin]
enable=1
processing-width=640
processing-height=384
full-frame=0
unique-id=7
gpu-id=0
blur-objects=0
nvbuf-memory-type=3

[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream-fence-app/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink1]
enable=1
type=3
container=1
codec=1
enc-type=0
sync=0
bitrate=2000000
profile=0
output-file=out.mp4
source-id=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
buffer-pool-size=4
batch-size=4
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
model-engine-file=./Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
batch-size=1
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[tests]
file-loop=1

2: My plugin is named "fenceplugin".
(1) I copied the dsexample folder and renamed it "gstfenceplugin".

(2) I added the parameters in the relevant places: /sources/apps-common/include/deepstream_config.h and deepstream_config_file_parser.h, and I added a deepstream_fenceplugin.h. The code is the same as dsexample.h; I just modified the parameters in the same way as dsexample.

(3) I modified /sources/apps-common/src/deepstream_config.c and deepstream_config_file_parser.c, and I added a deepstream_fenceplugin.c. The method is the same as above.

(4) I added code in deepstream_app.c, in the create_pipeline function. After the block that creates the dsexample element, I added the following code.
// Decide where in the pipeline the element should be added and add only if enabled
if (config->dsexample_config.enable)
{
// Create dsexample element bin and set properties
if (!create_dsexample_bin (&config->dsexample_config, &pipeline->dsexample_bin))
{
goto done;
}
// Add dsexample bin to instance bin
gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->dsexample_bin.bin);
// Link this bin to the last element in the bin
NVGSTDS_LINK_ELEMENT (pipeline->dsexample_bin.bin, last_elem);
// Set this bin as the last element
last_elem = pipeline->dsexample_bin.bin;
}

  //---------------- I  added here -------//
  if (config->fenceplugin_config.enable)
  {
    // Create fenceplugin element bin and set properties
    if (!create_fenceplugin_bin (&config->fenceplugin_config, &pipeline->fenceplugin_bin))
    {
      goto done;
    }
    // Add fenceplugin bin to instance bin
    gst_bin_add (GST_BIN (pipeline->pipeline), pipeline->fenceplugin_bin.bin);
    // Link this bin to the last element in the bin
    NVGSTDS_LINK_ELEMENT (pipeline->fenceplugin_bin.bin, last_elem);
    // Set this bin as the last element
    last_elem = pipeline->fenceplugin_bin.bin;
  }
  //---------------------------------------------//

(5) I added the code in deepstream_app_config_parser.c, in the same way as above.

(6) I ran "make install" for deepstream-app; that was OK.

(7) I ran "make install" for fenceplugin; that was OK.

Now I have the following problems:
(1) If tiled-display is enabled and the code order is right, i.e. fenceplugin comes before dsexample, the two plugins work well.
(2) But if tiled-display is disabled, only the dsexample plugin can be enabled on its own; fenceplugin does not work well.

(3) I would like to know what the problem is. Thank you very much.

If I just enable dsexample, it is like this:

1. Your plugin is not compatible with dsexample. Maybe when you copied dsexample, some code that needed to be modified was not modified, so the two plugins are incompatible when they register with GObject. This is a basic skill of GStreamer plugin development; please refer to the GStreamer community. (A sketch of the identifiers involved is below.)
2. If you want to draw a line and save it, just set the parameters in NvDsDisplayMeta; you do not need to write a new plugin. You can refer to deepstream_test1_app.c -> osd_sink_pad_buffer_probe (see the second sketch below).
Thanks
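To make point 1 concrete, here is a generic sketch (illustrative names only, not the actual files from this thread) of the identifiers in the copied gstdsexample.cpp/.h that must all be renamed; if any of them still read "dsexample", the second plugin clashes with the first when it registers with GObject/GStreamer:

/* gstfenceplugin.cpp (sketch). The matching GstFencePlugin structs and
 * gst_fenceplugin_class_init()/gst_fenceplugin_init() live in the rest of
 * the copied files. */
#include <gst/gst.h>
#include "gstfenceplugin.h"      /* assumed copy of gstdsexample.h defining
                                    GstFencePlugin and GST_TYPE_FENCEPLUGIN */

#define PACKAGE "fenceplugin"    /* the stock file says "dsexample"; must change */

/* GType boilerplate: registers a new GObject type, not GstDsExample. */
G_DEFINE_TYPE (GstFencePlugin, gst_fenceplugin, GST_TYPE_BASE_TRANSFORM);

/* Element factory name: must not be "dsexample". */
static gboolean
fenceplugin_plugin_init (GstPlugin * plugin)
{
  return gst_element_register (plugin, "fenceplugin", GST_RANK_PRIMARY,
      GST_TYPE_FENCEPLUGIN);
}

/* Plugin name (3rd argument) and package strings must also be unique across
 * the installed .so files. */
GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR,
    nvdsgst_fenceplugin, "fence drawing plugin (sketch)",
    fenceplugin_plugin_init, "6.1", "Proprietary", "FencePlugin",
    "http://nvidia.com/")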
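And for point 2, a minimal sketch in the style of deepstream_test1_app.c (the coordinates below are arbitrary examples): the line is attached as NvDsDisplayMeta in a probe on the nvdsosd sink pad, and nvdsosd then renders it into the frame, so anything downstream (file sink, JPEG save) sees the frame with the line drawn on it.

#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    NvDsDisplayMeta *display_meta =
        nvds_acquire_display_meta_from_pool (batch_meta);

    /* One horizontal line; coordinates are just an example. */
    NvOSD_LineParams *line = &display_meta->line_params[0];
    line->x1 = 100;
    line->y1 = 500;
    line->x2 = 1800;
    line->y2 = 500;
    line->line_width = 4;
    line->line_color.red = 1.0;
    line->line_color.green = 0.0;
    line->line_color.blue = 0.0;
    line->line_color.alpha = 1.0;
    display_meta->num_lines = 1;

    nvds_add_display_meta_to_frame (frame_meta, display_meta);
  }
  return GST_PAD_PROBE_OK;
}

The probe is attached with gst_pad_add_probe() on the nvdsosd element's sink pad, exactly as deepstream_test1_app.c does.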


Thank you for your response.
You are right. I re-modified the code and the two plugins now work fine.
I have another question: how is the order of the plugins decided in deepstream-app? Could you tell me which code decides it?

Thank you very much.

Glad to hear that worked; I will set this topic to resolved status.
As for the other questions, please open a new topic and I will check them there. Thanks

Thank you very much. I have done.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.