How is the sgie input format determined?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): both GPU and Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I think this has been covered a lot in previous topics, but face recognition usually involves the following pipeline:

pgie(face-detector, network-type=other, output-tensor-meta=1) -> tracker -> **sgie (network-type=100)**

Assuming that alignment is done in the pgie src pad probe, can it be said that sgie's default input is determined by obj_meta (the meta described below) when there is no additional setting, regardless of sgie's network-type?

  • Secondary mode: Operates on objects added in the meta by upstream components

The face detector is a detection model, so please set network-type to 0; the nvinfer low-level library will then attach object meta. If network-type is other and output-tensor-meta is 1, the nvinfer low-level library will not add object meta; you need to process the model's inference outputs in a probe function.
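For reference, below is a minimal sketch (not taken from any sample app) of such a probe when output-tensor-meta=1 and network-type=100: it locates the raw tensor output that nvinfer attaches as frame-level user meta. How the detector's output layers are decoded into bboxes/landmarks, and the NMS/alignment steps, are model specific and only indicated by comments.

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static GstPadProbeReturn
pgie_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* With output-tensor-meta=1, nvinfer attaches the raw model output as
     * frame-level user meta of type NVDSINFER_TENSOR_OUTPUT_META. */
    for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list; l_user; l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
        continue;

      NvDsInferTensorMeta *tensor_meta = (NvDsInferTensorMeta *) user_meta->user_meta_data;
      for (guint i = 0; i < tensor_meta->num_output_layers; i++) {
        NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[i];
        float *host_buf = (float *) tensor_meta->out_buf_ptrs_host[i];
        /* Model specific: split host_buf into bbox and landmark tensors,
         * run NMS, align the face crop, then add object meta (next sketch). */
        g_print ("pgie output layer %u: %s\n", i, layer->layerName);
        (void) host_buf;
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```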

Yes, that part is handled as you said.
Because the landmark information and the bbox information come out together, the output cannot be parsed by the default parsers NVIDIA provides. In other words, I receive the output as a tensor (network-type=100), separate it, run NMS, align the face, and then call add_obj_meta.
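For context, the add_obj_meta step in my probe looks roughly like the sketch below. add_face_obj_meta is just an illustrative helper; the component id, class id, and bbox values are placeholders for my setup.

```cpp
#include "gstnvdsmeta.h"

/* Illustrative helper: add one detected face (after NMS/alignment) back into
 * the frame meta so that the downstream sgie can operate on it. */
static void
add_face_obj_meta (NvDsBatchMeta * batch_meta, NvDsFrameMeta * frame_meta,
    float left, float top, float width, float height, float confidence)
{
  NvDsObjectMeta *obj_meta = nvds_acquire_obj_meta_from_pool (batch_meta);

  obj_meta->unique_component_id = 1;         /* pgie unique-id (assumed value) */
  obj_meta->class_id = 0;                    /* single "face" class (assumed) */
  obj_meta->confidence = confidence;
  obj_meta->object_id = UNTRACKED_OBJECT_ID; /* the tracker assigns the real id */

  /* The sgie (process-mode=2) crops and preprocesses this rectangle. */
  obj_meta->rect_params.left = left;
  obj_meta->rect_params.top = top;
  obj_meta->rect_params.width = width;
  obj_meta->rect_params.height = height;

  nvds_add_obj_meta_to_frame (frame_meta, obj_meta, NULL);
}
```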

But what I'm curious about is the processing logic after that: if sgie does not use input-tensor-meta, can it be said that each obj_meta in obj_meta_list is passed to sgie as its input?

What do you mean by this? Please refer to the complete sample deepstream-emotion-app.
Its pipeline is …-> pgie (detection) -> sgie1 (facial landmark, network-type=100) -> sgie2 (does not use nvinfer).

Thank you for the quick response.
However, I would like to focus on what happens going from pgie to sgie when pgie's network-type=100.

so if sgie does not use input-tensor-meta, can it be said that each obj_meta in obj_meta_list is passed to sgie as its input?

input_tensor_meta implies a pgie -> nvpreprocess -> sgie pipeline. To be clear, I did not use this pipeline.

So what I was curious about was whether the sgie input is each obj_meta in obj_meta_list when going from pgie -> sgie.

If input-tensor-meta is 1, the nvinfer plugin will not use object meta; it will use the batch-level user meta in batch_meta->batch_user_meta_list, which is added by the function nvds_add_user_meta_to_batch in the nvdspreprocess plugin.
The nvinfer plugin is open source; please refer to the function gst_nvinfer_process_tensor_input.
The nvdspreprocess plugin is open source; please refer to the function attach_user_meta_at_batch_level.
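For illustration only (this path is not used in your pipeline), a sketch of how that batch-level user meta can be inspected is shown below. NVDS_PREPROCESS_BATCH_META and GstNvDsPreProcessBatchMeta are the names from nvdspreprocess_meta.h as I recall them, so please verify against your DeepStream headers.

```cpp
#include "gstnvdsmeta.h"
#include "nvdspreprocess_meta.h"  /* header for the preprocess meta types (assumed name) */

/* Walk the batch-level user meta list that nvinfer consumes when
 * input-tensor-meta=1 (see gst_nvinfer_process_tensor_input). */
static void
inspect_preprocess_batch_meta (NvDsBatchMeta * batch_meta)
{
  for (NvDsMetaList * l_user = batch_meta->batch_user_meta_list; l_user; l_user = l_user->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
    if (user_meta->base_meta.meta_type != NVDS_PREPROCESS_BATCH_META)
      continue;

    /* This struct carries the already-prepared input tensor plus the ROI list,
     * so nvinfer does not read frame_meta->obj_meta_list in this mode. */
    GstNvDsPreProcessBatchMeta *preproc_meta =
        (GstNvDsPreProcessBatchMeta *) user_meta->user_meta_data;
    (void) preproc_meta;
    g_print ("found batch-level preprocess user meta\n");
  }
}
```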

Hmm… I think you misread my reply. input_tensor_meta is not used at all (as I mentioned, I did not use that pipeline), and the context of my question is slightly different from this answer.

If input-tensor-meta is not set to 1, sgie's nvinfer will process each object in frame_meta->obj_meta_list. Please refer to gst_nvinfer_process_objects in the nvinfer plugin.
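A minimal sketch of a probe on the sgie sink pad that prints exactly those objects (the ones sgie will crop and infer on, subject to its operate-on-gie-id and minimum-size filters):

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
sgie_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* Without input-tensor-meta, these obj_meta entries are what sgie's
     * gst_nvinfer_process_objects() iterates over. */
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      g_print ("object %" G_GUINT64_FORMAT " from component %d: %.0fx%.0f at (%.0f,%.0f)\n",
          obj_meta->object_id, obj_meta->unique_component_id,
          obj_meta->rect_params.width, obj_meta->rect_params.height,
          obj_meta->rect_params.left, obj_meta->rect_params.top);
    }
  }
  return GST_PAD_PROBE_OK;
}
```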

