In DeepStream, how can I pass data to the next plugin?

Hi,

I use two inference stages in DeepStream.

I want to use the data from the first inference in the second inference.

There are two plugins, each implementing one inference stage.

How can I pass the data of the first inference to the next plugin (the second inference)?

Thanks.

Hi,

Could you share more information about your use case?
Is the model detection + classification?

DeepStream does support a pipeline with detector + classifier inference:
Please check our document for the details:
[url]https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_app_architecture.html[/url]

If you have a similar workflow, replacing the file paths in the configuration will be enough.
Thanks.

Hi,

I am using a custom detector + tracker, plus pose estimation.

In the pose estimation plugin, I need to get the detector's data.

How can I pass the data?

Hi,

You can use the operate-on-gie-id and operate-on-class-ids flags.

Take source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt as an example:

[primary-gie]
...
gie-unique-id=1
config-file=config_infer_primary.txt


[secondary-gie1]
...
[b]operate-on-gie-id=1
operate-on-class-ids=0;[/b]
config-file=config_infer_secondary_carcolor.txt

The detector, declared as primary-gie, uses ID=1.
The classifier in secondary-gie1 will be applied to the ROIs of class #0 detected by the engine with ID=1.
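
For a custom setup, the same filtering can also be expressed in the secondary engine's own nvinfer configuration file ([property] group). A sketch, where the gie-unique-id=2 value is an illustrative assumption:

```
[property]
# ID that this engine writes into the metadata it attaches
gie-unique-id=2
# only infer on objects attached by the engine with gie-unique-id=1
operate-on-gie-id=1
# ...and only on its class #0 detections
operate-on-class-ids=0
```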

Thanks.

Hi,

But I am not using deepstream-app with config files.

I made custom GStreamer plugins like dsexample.

Then I built an executable program like deepstream-redaction-app.

How can I do this in that case?

Thanks.

Hi,

For a dsexample-like application, you can directly control the GStreamer data flow.
So just make sure you feed the output of the primary-gie into the secondary-gie.
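
As a rough sketch, the element ordering in such a pipeline looks like this (standard DeepStream element names; the custom pose-estimation element name is hypothetical):

```
source ! ... ! nvstreammux ! nvinfer (detector, unique-id=1) ! nvtracker
       ! dsposeestimation (custom plugin, reads the detector's NvDsObjectMeta)
       ! nvosd ! sink
```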

Ex. In dsexample, you will need to add a secondary-gie stage in DsExampleProcess:

if (dsexample->process_full_frame) {
    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next)
    {
      frame_meta = (NvDsFrameMeta *) (l_frame->data);
      NvOSD_RectParams rect_params;

      // Scale the entire frame to processing resolution
      rect_params.left = 0;
      rect_params.top = 0;
      rect_params.width = dsexample->video_info.width;
      rect_params.height = dsexample->video_info.height;

      // Scale and convert the frame
      if (get_converted_mat (dsexample, surface, i, &rect_params,
            scale_ratio, dsexample->video_info.width,
            dsexample->video_info.height) != GST_FLOW_OK) {
        goto error;
      }

      // Process to get the output
      output =
          DsExampleProcess (dsexample->dsexamplelib_ctx,
          dsexample->cvmat->data);
      // Attach the metadata for the full frame
      attach_metadata_full_frame (dsexample, frame_meta, scale_ratio, output, i);
      i++;
      free (output);
    }

  } else {
    // Using object crops as input to the algorithm. The objects are detected by
    // the primary detector
    NvDsMetaList * l_obj = NULL;
    NvDsObjectMeta *obj_meta = NULL;

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next)
    {
      frame_meta = (NvDsFrameMeta *) (l_frame->data);
      for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
          l_obj = l_obj->next)
      {
        obj_meta = (NvDsObjectMeta *) (l_obj->data);


        /* Should not process on objects smaller than MIN_INPUT_OBJECT_WIDTH x MIN_INPUT_OBJECT_HEIGHT
         * since it will cause hardware scaling issues. */
        if (obj_meta->rect_params.width < MIN_INPUT_OBJECT_WIDTH ||
            obj_meta->rect_params.height < MIN_INPUT_OBJECT_HEIGHT)
          continue;

        // Crop and scale the object
        if (get_converted_mat (dsexample,
              surface, frame_meta->batch_id, &obj_meta->rect_params,
              scale_ratio, dsexample->video_info.width,
              dsexample->video_info.height) != GST_FLOW_OK) {
          // Error in conversion, skip processing on this object
          continue;
        }

        // Process the object crop to obtain label
        output = DsExampleProcess (dsexample->dsexamplelib_ctx,
            dsexample->cvmat->data);

        // Attach labels for the object
        attach_metadata_object (dsexample, obj_meta, output);

        free (output);
      }
    }
  }

Thanks.