Why does the preprocess plugin not run?

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)**: RTX 4080
**• DeepStream Version**: 7.0
I am testing the default preprocessing plugin.
The main config file has a preprocess section:

secondary-preprocess0:
  config-file-path: /workspace/opt/nvidia/deepstream/deepstream-7.0/sources/gst-plugins/gst-nvdspreprocess-ava/config_preprocess.txt

Then in config_preprocess.txt, the custom library .so file is linked as
custom-lib-path=/workspace/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_preprocess_ava.so

Then inside the gstnvdspreprocess.cpp file, there are prints in

static GstFlowReturn
gst_nvdspreprocess_on_frame (GstNvDsPreProcess * nvdspreprocess, GstBuffer * inbuf,
    NvBufSurface * in_surf)

and

static GstFlowReturn
gst_nvdspreprocess_on_objects (GstNvDsPreProcess * nvdspreprocess, GstBuffer * inbuf,
    NvBufSurface * in_surf)

But they are never printed.
Why doesn't the preprocess plugin run in the main app?

The main config file is:

source-list:
   list: file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;file:///workspace/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4;

streammux:
  width: 1920
  height: 1080
  batched-push-timeout: 40000

tracker:
  enable: 1
  ll-lib-file: /workspace/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  ll-config-file: /workspace/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml

primary-gie:
  plugin-type: 1
  #config-file-path: ../nvinfer/peoplenet_tao/config_infer_primary_peoplenet.txt
  config-file-path: ../triton/peoplenet_tao/config_infer_primary_peoplenet.yml
  #config-file-path: ../triton-grpc/peoplenet_tao/config_infer_primary_peoplenet.yml

secondary-preprocess0:
  config-file-path: /workspace/opt/nvidia/deepstream/deepstream-7.0/sources/gst-plugins/gst-nvdspreprocess-ava/config_preprocess.txt

sink:
  #0 fakesink 
  #1 filesink generate the out.mp4 file in the current directory
  #2 rtspsink publish at rtsp://localhost:8554/ds-test
  #3 displaysink
  sink-type: 3
  #encoder type 0=Hardware 1=Software
  enc-type: 0

config_preprocess.txt is attached:
config_preprocess.txt (3.3 KB)

I realized that the preprocess element was not added to the pipeline in main().
So I added it using gst_element_factory_make:

preprocess = gst_element_factory_make("nvdspreprocess", "preprocess-plugin");
nvds_parse_preprocess(preprocess, argv[1], "secondary-preprocess");
gst_bin_add_many(GST_BIN(pipeline), pgie, tracker, preprocess, nvtile,
    nvvidconv, nvosd, sink, nvdslogger, NULL);

  // Link elements
  if (!gst_element_link_many(streammux, pgie, tracker, preprocess, nvdslogger, nvtile, nvvidconv, nvosd,  sink, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }

But I get what(): bad_function_call when the app runs.
The full error output is as follows.

WARNING: Overriding infer-config batch-size (0) with number of sources (20)
sink_type:3, enc_type:0
Now playing!
terminate called after throwing an instance of 'std::bad_function_call'
  what():  bad_function_call
Aborted (core dumped)

The whole main() function is as follows.

int main(int argc, char *argv[])
{
  guint num_sources = 0;

  GMainLoop *loop = NULL;
  GstCaps *caps = NULL;
  GstElement *streammux = NULL, *pgie = NULL, *preprocess = NULL;
  GstElement *nvvidconv = NULL, *nvtile = NULL, *nvosd = NULL, *tracker = NULL, *nvdslogger = NULL;
  GstElement *sink = NULL;
  DsSourceBinStruct source_struct[128];
  GstBus *bus = NULL;
  guint bus_watch_id;

  gboolean useDisplay = FALSE;
  gboolean useFakeSink = FALSE;
  gboolean useFileSink = FALSE;
  guint tiler_rows, tiler_columns;
  GstPad *sinkpad, *srcpad;
  gchar pad_name_sink[16] = "sink_0";
  gchar pad_name_src[16] = "src";

  bool isStreaming=false;
  GList* g_list = NULL;
  GList* iterator = NULL;
  bool isH264 = true;
  gchar *filepath = NULL;


  int current_device = -1;
  cudaGetDevice(&current_device);
  struct cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, current_device);

  /* Standard GStreamer initialization */
  // signal(SIGINT, sigintHandler);
  gst_init(&argc, &argv);
  loop = g_main_loop_new(NULL, FALSE);

  _intr_setup ();
  g_timeout_add (400, check_for_interrupt, NULL);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new("deepstream_pose_classfication_app");
  if (!pipeline) {
    g_printerr ("Pipeline could not be created. Exiting.\n");
    return -1;
  }

  /* we add a message handler */
  bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
  bus_watch_id = gst_bus_add_watch(bus, bus_call, loop);
  gst_object_unref(bus);

  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "streammux-pgie");
  if (!streammux) {
    g_printerr ("PGIE streammux could not be created. Exiting.\n");
    return -1;
  }
  gst_bin_add(GST_BIN(pipeline), streammux);
  parse_streammux_width_height_yaml(&_image_width, &_image_height, argv[1]);
  g_print("width %d hight %d\n", _image_width, _image_height);

  if (NVDS_YAML_PARSER_SUCCESS != nvds_parse_source_list(&g_list, argv[1], "source-list")) {
    g_printerr ("No source is found. Exiting.\n");
    return -1;
  }

  for (iterator = g_list, num_sources=0; iterator; iterator = iterator->next,num_sources++) {
    /* Source element for reading from the file */
    source_struct[num_sources].index = num_sources;

    if (g_strrstr ((gchar *)iterator->data, "rtsp://") ||
        g_strrstr ((gchar *)iterator->data, "v4l2://") ||
        g_strrstr ((gchar *)iterator->data, "http://") ||
        g_strrstr ((gchar *)iterator->data, "rtmp://")) {
      isStreaming = true;
    } else {
      isStreaming = false;
    }

    g_print("video %s\n", (gchar *)iterator->data);

    if (!create_source_bin (&(source_struct[num_sources]), (gchar *)iterator->data))
    {
      g_printerr ("Source bin could not be created. Exiting.\n");
      return -1;
    }
      
    gst_bin_add (GST_BIN (pipeline), source_struct[num_sources].source_bin);
      
    g_snprintf (pad_name_sink, sizeof (pad_name_sink), "sink_%d", num_sources);
    sinkpad = gst_element_get_request_pad (streammux, pad_name_sink);
    if (!sinkpad) {
      g_printerr ("Streammux request sink pad failed. Exiting.\n");
      return -1;
    }

    srcpad = gst_element_get_static_pad (source_struct[num_sources].source_bin,
        pad_name_src);
    if (!srcpad) {
      g_printerr ("Decoder request src pad failed. Exiting.\n");
      return -1;
    }
    GstPadLinkReturn ret = gst_pad_link (srcpad, sinkpad);
    if ( ret != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link decoder to stream muxer. Exiting. %d\n",ret);
      return -1;
    }
    gst_object_unref (sinkpad);
    gst_object_unref (srcpad);
  }

  nvds_parse_streammux(streammux, argv[1], "streammux");

  if (isStreaming)
    g_object_set (G_OBJECT (streammux), "live-source", true, NULL);
  g_object_set (G_OBJECT (streammux), "batch-size", num_sources, NULL);

  /* Use nvinfer to run inferencing on decoder's output,
   * behaviour of inferencing is set through config file */
  NvDsGieType pgie_type = NVDS_GIE_PLUGIN_INFER;
  RETURN_ON_PARSER_ERROR(nvds_parse_gie_type(&pgie_type, argv[1], "primary-gie"));
  if (pgie_type == NVDS_GIE_PLUGIN_INFER_SERVER) {
      pgie = gst_element_factory_make("nvinferserver", "primary-nvinference-engine");
  } else {
      pgie = gst_element_factory_make("nvinfer", "primary-nvinference-engine");
  }
  if (!pgie) {
    g_printerr ("PGIE element could not be created. Exiting.\n");
    return -1;
  }
  nvds_parse_gie (pgie, argv[1], "primary-gie");
  /* preprocess */
  preprocess = gst_element_factory_make("nvdspreprocess", "preprocess-plugin");
  nvds_parse_preprocess(preprocess, argv[1], "secondary-preprocess");
  /* Override the batch-size set in the config file with the number of sources. */
  guint pgie_batch_size = 0;
  g_object_get(G_OBJECT(pgie), "batch-size", &pgie_batch_size, NULL);
  if (pgie_batch_size != num_sources) {
    g_printerr
        ("WARNING: Overriding infer-config batch-size (%d) with number of sources (%d)\n",
        pgie_batch_size, num_sources);

    g_object_set(G_OBJECT(pgie), "batch-size", num_sources, NULL);
  }

  //---Set pgie properties---

  /* We need to have a tracker to track the identified objects */
  tracker = gst_element_factory_make ("nvtracker", "tracker");
  if (!tracker) {
    g_printerr ("Nvtracker could not be created. Exiting.\n");
    return -1;
  }
  nvds_parse_tracker(tracker, argv[1], "tracker");

  nvdslogger = gst_element_factory_make ("nvdslogger", "nvdslogger");
  if (!nvdslogger) {
      g_printerr ("Nvdslogger could not be created. Exiting.\n");
      return -1;
  }
  g_object_set (G_OBJECT(nvdslogger), "fps-measurement-interval-sec",
        1, NULL);

  /* Let's add a probe to get informed of the metadata generated; we add the
   * probe to the src pad of the tracker element, since by that time the
   * buffer will have the detection and tracking metadata. */
  GstPad* pgie_src_pad = gst_element_get_static_pad(tracker, "src");
  if (!pgie_src_pad)
    g_printerr ("Unable to get src pad for pgie\n");
  else
    gst_pad_add_probe(pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
        pgie_src_pad_buffer_probe, NULL, NULL);
  gst_object_unref (pgie_src_pad);

  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make("nvvideoconvert", "nvvideo-converter");
  if (!nvvidconv) {
    g_printerr ("nvvidconv could not be created. Exiting.\n");
    return -1;
  }
  gchar *string1 = NULL;
  asprintf (&string1, "video/x-raw(memory:NVMM),width=%d,height=%d",
      _image_width, _image_height); 
  //---Manipulate image size so that PGIE bbox is large enough---

  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");
  if (!nvosd) {
    g_printerr ("Nvdsosd could not be created. Exiting.\n");
    return -1;
  }
  nvtile = gst_element_factory_make ("nvmultistreamtiler", "nvtiler");
  tiler_rows = (guint) sqrt (num_sources);
  tiler_columns = (guint) ceil (1.0 * num_sources / tiler_rows);
  g_object_set (G_OBJECT (nvtile), "rows", tiler_rows, "columns",
      tiler_columns, "width", 1280, "height", 720, NULL);

  /* Lets add probe to get informed of the meta data generated, we add probe to
   * the sink pad of the osd element, since by that time, the buffer would have
   * had got all the metadata. */
  GstPad* osd_sink_pad = gst_element_get_static_pad(nvosd, "sink");
  if (!osd_sink_pad)
    g_print("Unable to get sink pad\n");
  else
    gst_pad_add_probe(osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                      osd_sink_pad_buffer_probe, NULL, NULL);
  gst_object_unref(osd_sink_pad);

  /* Set output file location */
  int sink_type = 0;
  parse_sink_type_yaml(&sink_type, argv[1]);
  int enc_type = 0;
  parse_sink_enc_type_yaml(&enc_type, argv[1]);
  g_print("sink_type:%d, enc_type:%d\n", sink_type, enc_type);

  if(sink_type == 1) {
    sink = gst_element_factory_make("nvvideoencfilesinkbin", "nv-filesink");
    if (!sink) {
      g_printerr ("Filesink could not be created. Exiting.\n");
      return -1;
    }
    g_object_set(G_OBJECT(sink), "output-file", "out.mp4", NULL);
    g_object_set(G_OBJECT(sink), "bitrate", 4000000, NULL);
    //g_object_set(G_OBJECT(sink), "profile", 3, NULL);
    g_object_set(G_OBJECT(sink), "codec", 1, NULL);//hevc
    // g_object_set(G_OBJECT(sink), "control-rate", 0, NULL);//hevc
    g_object_set(G_OBJECT(sink), "enc-type", enc_type, NULL);
  } else if(sink_type == 2) {
    sink = gst_element_factory_make("nvrtspoutsinkbin", "nv-rtspsink");
    if (!sink) {
      g_printerr ("Filesink could not be created. Exiting.\n");
      return -1;
    }
    g_object_set(G_OBJECT(sink), "enc-type", enc_type, NULL);
  } else if(sink_type == 3) {
    if (prop.integrated) {
      sink = gst_element_factory_make("nv3dsink", "nv-sink");
    } else {
#ifdef __aarch64__
      sink = gst_element_factory_make("nv3dsink", "nv-sink");
#else
      sink = gst_element_factory_make("nveglglessink", "nv-sink");
#endif
    }
  } else {
    sink = gst_element_factory_make("fakesink", "nv-fakesink");
  }

  /* Add all elements to the pipeline */
  // streammux has been added into pipeline already.
  gst_bin_add_many(GST_BIN(pipeline), pgie, tracker, preprocess, nvtile,
    nvvidconv, nvosd, sink, nvdslogger, NULL);

  // Link elements
  if (!gst_element_link_many(streammux, pgie, tracker, preprocess, nvdslogger, nvtile, nvvidconv, nvosd,  sink, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }

  /* Set the pipeline to "playing" state */
  g_print("Now playing!\n");
  gst_element_set_state(pipeline, GST_STATE_PLAYING);
  GST_DEBUG_BIN_TO_DOT_FILE((GstBin*)pipeline, GST_DEBUG_GRAPH_SHOW_ALL, "pipeline");

  /* Wait till pipeline encounters an error or EOS */
  g_print("Running...\n");
  g_main_loop_run(loop);

  /* Out of the main loop, clean up nicely */
  g_print("Returned, stopping playback\n");
  gst_element_set_state(pipeline, GST_STATE_NULL);
  g_print("Deleting pipeline\n");
  gst_object_unref(GST_OBJECT(pipeline));
  g_source_remove(bus_watch_id);
  g_main_loop_unref(loop);

  return 0;

}

I think the correct pipeline should be

nvstreammux --> preprocess -> nvinfer ....

So I modified the code like this:

gst_element_link_many(streammux, preprocess, pgie, tracker, ...
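
For completeness, the full modified add/link section is below (a minimal sketch using the same element variables created earlier in main()):

gst_bin_add_many(GST_BIN(pipeline), preprocess, pgie, tracker, nvtile,
    nvvidconv, nvosd, sink, nvdslogger, NULL);

// nvdspreprocess now sits between nvstreammux and the PGIE
if (!gst_element_link_many(streammux, preprocess, pgie, tracker, nvdslogger, nvtile, nvvidconv, nvosd, sink, NULL)) {
  g_printerr ("Elements could not be linked. Exiting.\n");
  return -1;
}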

The preprocess is for an SGIE, but the SGIE is not implemented yet. I'm exploring preprocess first; I need to understand the default preprocess so that I can manage detected objects for the SGIE.

You can refer to deepstream-app for the usage of secondary pre-processing.

The following configuration file can be referenced:
source4_1080p_dec_preprocess_infer-resnet_preprocess_sgie_tiled_display_int8.txt
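
From memory, the relevant groups in that sample look roughly like the snippet below; treat the group and key names as an assumption and check the shipped file for the exact spelling:

[pre-process]
enable=1
config-file=config_preprocess.txt

[secondary-pre-process]
enable=1
config-file=config_preprocess_sgie.txt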

Sure, OK. I'll run the program and try to understand it. Thanks a lot for the suggestion.

Now I understand how preprocessing is implemented in deepstream-app.
But my preprocessing requirement is a bit different.
The model is a spatial-temporal model, so I need to collect 40 cropped objects for each object ID.
Once a vector is filled with 40 cropped objects of the same ID, that vector will be sent to the SGIE. The SGIE needs to be implemented on Triton server, since my model is a PyTorch model.
I am thinking of using a vector of NvDsRoiMeta. scale_and_fill_data crops the objects, but I can't find where the cropped object is put into NvDsRoiMeta.
Once the vector size reaches 40, the vector will be sent to the SGIE.
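
To make the idea concrete, here is a rough sketch of the buffering I have in mind (pseudo-code: CroppedObject and collect_crop are names I made up, not DeepStream API):

#include <glib.h>
#include <unordered_map>
#include <vector>

// Placeholder for one scaled/cropped object image.
struct CroppedObject {
  guint64 object_id;   // tracker ID from NvDsObjectMeta
  void *crop_buffer;   // the cropped image data, however it is stored
};

// One vector of crops per tracked object ID.
static std::unordered_map<guint64, std::vector<CroppedObject>> g_crops_by_id;

static void collect_crop(guint64 object_id, void *crop_buffer)
{
  auto &crops = g_crops_by_id[object_id];
  crops.push_back({object_id, crop_buffer});
  if (crops.size() == 40) {
    // This is the part I don't know how to do:
    // hand these 40 crops to the SGIE as one inference request.
    crops.clear();
  }
}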

The things I need help with are:
(1) Is it correct to use a vector of NvDsRoiMeta to store cropped objects with the same object ID?
(2) How can I send the vector to the SGIE for inference once its size reaches 40?

Can the model be exported to ONNX format? That way you can use nvinfer alone in the pipeline, without adding nvinferserver.

NvDsRoiMeta is only used to record the ROI (region of interest); it cannot be used to store detected objects.

Try adding network-input-shape to the configuration file. You can refer to the role of nvdspreprocess->max_batch_size in nvdspreprocess_property_parser.cpp.

Thanks for the reply.
But I didn't ask my questions clearly.
My questions are:
(1) How can I crop objects in the preprocessing .cpp file, inside the process-on-objects (not on-frame) function?
(2) I am going to use a vector per object ID to collect 40 cropped images of that particular object. Once the vector is full, I need to send it to the SGIE. How can I send it?

Why do you need to crop objects manually? If you set the process-on-frame property to 0, nvdspreprocess will handle it for you.

Set operate-on-class-ids to the class ID of the objects you want to crop, and then configure network-input-shape to 40;xx;xx;xx.

If I am not mistaken, your SGIE input is a batch of 40 tensors prepared by nvdspreprocess.
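
For reference, the relevant part of the nvdspreprocess config for this mode would look roughly like the following; the dimensions, IDs, and custom-lib path are placeholders, and the shipped config_preprocess_sgie.txt sample shows the exact keys:

[property]
enable=1
# 0 = operate on detected objects (SGIE mode), 1 = operate on full frames
process-on-frame=0
# batch;channels;height;width; the first value also sets max_batch_size
network-input-shape=40;3;224;224
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=-1
# operate on objects produced by the PGIE with this unique ID
operate-on-gie-id=1
# class IDs of the objects to crop
operate-on-class-ids=0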

Thanks a lot. It looks quite straightforward; I'll try it. Thank you. Then I'll implement the model on Triton. According to the documentation, inference on a Triton server can be designed to be faster than nvinfer. Is that true?

This is not true. nvinferserver is based on Triton server, and one of Triton's backends is TensorRT, the same as nvinfer.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
