Cannot obtain Classifier raw tensor output or classifier meta

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson AGX Orin
• DeepStream Version: 7.0
• JetPack Version: (6.0+b106 and 6.0+b87 both are installed) L4T 36.3.0
• TensorRT Version: 8.6.2

I am working on the following kind of pipeline:
face detection(generates bbox and kps) → face recognition(generates 512 embedding values) → face swap(generates swapped face image)

My face swap model needs two inputs: an image and a 1x512-dimensional embedding from the face recognition model. So I am using two custom preprocess libs, one for each input layer, and everything works fine with static embeddings that I added manually.

However, when I try to access the embeddings from the recognition model in the preprocess lib for the embedding input layer, I am not able to get them.

I tried it both ways:
1. Attaching the raw tensor data as meta by setting output-tensor-meta=1
2. Using obj_meta->classifier_meta_list

But it comes back NULL in both cases, and I am not able to understand why.

Please help me with this.

These are the files I am using and the lib code that I modified:

main_app_config.txt (4.1 KB)
recog_config.txt (479 Bytes)
scrfd_config.txt (878 Bytes)
secondary_preprocess_nonimg.txt (2.5 KB)
secondary_preprocess.txt (2.5 KB)

NvDsPreProcessStatus CustomTensorPreparation(
    CustomCtx *ctx, NvDsPreProcessBatch *batch, NvDsPreProcessCustomBuf *&buf,
    CustomTensorParams &tensorParam, NvDsPreProcessAcquirer *acquirer)
{
  // Start with a "not ready" status so early returns report failure.
  NvDsPreProcessStatus status = NVDSPREPROCESS_TENSOR_NOT_READY;

  // Acquire a buffer from the tensor pool.
  buf = acquirer->acquire();
  if (!buf) {
    std::cerr << "Error: Failed to acquire buffer from tensor pool." << std::endl;
    return status;
  }

  GstBuffer *inbuf = batch->inbuf;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(inbuf);

  // Iterate over frames in the batch.
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != nullptr;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = reinterpret_cast<NvDsFrameMeta *>(l_frame->data);

    // Iterate over objects in the frame.
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != nullptr;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = reinterpret_cast<NvDsObjectMeta *>(l_obj->data);
      if (!obj_meta) continue;

      if (!obj_meta->classifier_meta_list) { std::cout << "no recog meta" << std::endl; } // continue;
      if (!obj_meta->obj_user_meta_list) { std::cout << "no user meta" << std::endl; } // continue;

      // Access classifier metadata.
      for (NvDsMetaList *l_classifier = obj_meta->classifier_meta_list;
           l_classifier != nullptr; l_classifier = l_classifier->next) {
        NvDsClassifierMeta *classifier_meta =
            reinterpret_cast<NvDsClassifierMeta *>(l_classifier->data);
        // if (classifier_meta->unique_component_id != 3) continue;

        // Access label information.
        for (NvDsMetaList *l_label = classifier_meta->label_info_list;
             l_label != nullptr; l_label = l_label->next) {
          NvDsLabelInfo *label_info = reinterpret_cast<NvDsLabelInfo *>(l_label->data);
          // Prefer the dynamically allocated label; fall back to the fixed-size one.
          const gchar *label = label_info->pResult_label
                                   ? label_info->pResult_label
                                   : label_info->result_label;
          g_print("Label: %s, Confidence: %f, labelid: %d\n",
                  label, label_info->result_prob, label_info->label_id);
        }
      }
    }
  }

  status = ctx->tensor_impl->syncStream();
  if (status != NVDSPREPROCESS_SUCCESS) {
    std::cerr << "Custom Lib: CUDA stream synchronization failed" << std::endl;
    acquirer->release(buf);
  }

  return status;
}



 NvDsPreProcessStatus
 CustomTransformation(NvBufSurface *in_surf, NvBufSurface *out_surf, 
   CustomTransformParams &params)
 {

   return NVDSPREPROCESS_SUCCESS;
 }

However, I have checked that the raw tensor output of the recog model is accessible in the gie_processing_done_buf_prob function of the deepstream-app.

Are you working with “deepstream-app” sample app?

Is the following list correct?
face detection - PGIE
face recognition - SGIE
face swap - SGIE

Yes, I am working with the “deepstream-app” sample app; face recognition is SGIE1 and face swap is SGIE2.

What is this for?

I was trying to send the output of the recog model as classifier meta from the custom parser.

Do you want to see that too?

Do you want to get the tensor meta in gie_processing_done_buf_prob() when setting “output-tensor-meta=1” in recog_config.txt?

No, I want it in the preprocess lib for the non-image input layer of my swap model,

and I want it either way: as output tensor meta or as classifier meta.

Hello, sorry, but I am still waiting for your help!

The SGIE’s output tensor can be available in the object meta’s user meta.

The attached files are updated to get the SGIE's output tensor with the sample deepstream_tao_apps/apps/tao_others/deepstream_lpr_app at master · NVIDIA-AI-IOT/deepstream_tao_apps

deepstream_lpr_app.c (33.2 KB)
lpr_app_infer_us_config.yml (1.2 KB)
sgie_lpd_DetectNet2_us.txt (3.2 KB)

The output log shows the tensor output is available.

Frame Number = 317 Vehicle Count = 2 Person Count = 0 License Plate Count = 1
SGIE Obj meta: user meta type tensor output meta
SGIE Obj meta: user meta type tensor output meta
SGIE batch end
Plate License 7SCK505
Frame Number = 318 Vehicle Count = 2 Person Count = 0 License Plate Count = 1
SGIE Obj meta: user meta type tensor output meta
SGIE Obj meta: user meta type tensor output meta
SGIE batch end
Plate License 7SCK505
Frame Number = 319 Vehicle Count = 2 Person Count = 0 License Plate Count = 1
SGIE Obj meta: user meta type tensor output meta
SGIE Obj meta: user meta type tensor output meta
SGIE batch end
Plate License 7SCK505
Frame Number = 320 Vehicle Count = 2 Person Count = 0 License Plate Count = 1
SGIE Obj meta: user meta type tensor output meta
SGIE Obj meta: user meta type tensor output meta
SGIE batch end
Plate License 7SCK505
Frame Number = 321 Vehicle Count = 2 Person Count = 0 License Plate Count = 1
SGIE Obj meta: user meta type tensor output meta
SGIE Obj meta: user meta type tensor output meta
SGIE batch end
Plate License 7SCK505

Please replace deepstream_tao_apps/apps/tao_others/deepstream_lpr_app/deepstream-lpr-app/deepstream_lpr_app.c, deepstream_tao_apps/configs/app/lpr_app_infer_us_config.yml, and deepstream_tao_apps/configs/nvinfer/LPD_us_tao/sgie_lpd_DetectNet2_us.txt with the attached files. Please modify the “source-list” in deepstream_tao_apps/configs/app/lpr_app_infer_us_config.yml to a video containing a car with a US license plate to make sure the sample can run.
The command line should be:

./deepstream-lpr-app ../../../../configs/app/lpr_app_infer_us_config.yml

I am sorry, but this is not relevant for me; as I already told you, I am able to access the raw data in the probe function.

However, I want to access it in the custom preprocess lib .cpp file, because I want to modify the output and send it to my face swap model, which I cannot do since I am unable to access the data in the preprocess file.

You can also get the whole NvDsBatchMeta in the downstream customized nvdspreprocess library if you can get the tensor from the NvDsBatchMeta in the pad probe function.

The tensor data is just a part of the NvDsBatchMeta.
deepstream_tao_apps/apps/tao_others/deepstream-pose-classification/nvdspreprocess_lib/nvdspreprocess_lib.cpp at master · NVIDIA-AI-IOT/deepstream_tao_apps

But that's what I am saying: I am able to access the meta of the PGIE, but for the SGIE it comes back NULL.

secondary_preprocess_nonimg_lib.txt (12.3 KB)

This is the code I am using; as you can see, I am trying to access the classifier meta, but I am getting the prints for the NULL condition.

Please refer to the sample: deepstream_tao_apps/apps/tao_others/deepstream-pose-classification at master · NVIDIA-AI-IOT/deepstream_tao_apps
deepstream_tao_apps/apps/tao_others/deepstream-pose-classification/nvdspreprocess_lib/nvdspreprocess_lib.cpp at master · NVIDIA-AI-IOT/deepstream_tao_apps is also a sample of nvdspreprocess placed after an SGIE.

The pipeline is like:

PGIE(person detection) → SGIE1(bodypose3d) → nvdspreprocess(read the SGIE output) → SGIE2(pose_classification)

I’ve added the tensor output read code in the nvdspreprocess library in deepstream_tao_apps/apps/tao_others/deepstream-pose-classification/nvdspreprocess_lib/nvdspreprocess_lib.cpp at master · NVIDIA-AI-IOT/deepstream_tao_apps and got the tensor output from the upstream SGIE1 successfully.

nvdspreprocess_lib.cpp (9.7 KB)

This is the log I got with the modified nvdspreprocess lib

In cb_newpad
###Decodebin pick nvidia decoder plugin.
In cb_newpad
In cb_newpad
###Decodebin pick nvidia decoder plugin.
In cb_newpad
NvMMLiteOpen : Block : BlockType = 4
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4

H264: Profile = 66 Level = 0
NVMEDIA: Need to set EMC bandwidth : 363466
NvVideo: bBlitMode is set to TRUE

SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
**PERF : FPS_0 (4.00)   FPS_1 (4.00)
SGIE Obj meta: user meta type tensor output meta
Tensor layer num: 4
SGIE Obj meta: user meta type tensor output meta

Thanks for your help, but I have one query about the pipeline you built.

My pipeline is a bit different:

PGIE → SGIE1 → Preprocess1 → SGIE2
         |
         └→ Preprocess2 → SGIE3

So, as you can see, I want to access the meta of SGIE1 in Preprocess2. I tried your suggestion but still couldn't get the meta, so I want to know whether that is possible at all, as I am not sure the two branches share the same buffers.

Also, I want to add one more thing. I added a print statement as follows:

for (int i = 0; i < units; i++) {
  guint64 object_id = batch->units[i].roi_meta.object_meta->object_id;
  g_print("Object_ID outside meta loop: %" G_GUINT64_FORMAT "\n", object_id);
  GstBuffer *inbuf = (GstBuffer *)batch->inbuf;

to see which object my lib is operating on, and I don't know why, but I am getting only even values: 0, 2, 4, 6, 8, and so on.

Also, here are my config files and my custom parser. I created a graph of the pipeline from this and observed that secondary_preprocess_nonimg was not included:
main_app_config (2).txt (3.5 KB)
secondary_preprocess (1).txt (2.5 KB)
secondary_preprocess_nonimg.txt (2.5 KB)
recog_chroma_new.txt (1.8 KB)

https://drive.google.com/drive/folders/1oN8dmpw7mFgRwSVAb6uhQxTE_qLAaAzU?usp=drive_link
This has my pipeline graph in both SVG and PNG formats; view whichever suits you.

UPDATE: I have looked at the code of deepstream_secondary_preprocess.c, which builds the pipeline for the sample deepstream-app, and it contains the function

static gboolean
should_create_secondary_preprocess (NvDsPreProcessConfig *config_array,
    guint num_configs, NvDsSecondaryPreProcessBinSubBin *bins, guint index,
    gint primary_gie_id)
{
  NvDsPreProcessConfig *config = &config_array[index];
  guint i;

  if (!config->enable) {
    return FALSE;
  }
  if (bins[index].create) {
    return TRUE;
  }
  if (config->operate_on_gie_id == primary_gie_id) {
    bins[index].create = TRUE;
    bins[index].parent_index = -1;
    return TRUE;
  }
  return FALSE;
}

Because of this, I am not able to add a preprocess after an SGIE, so please guide me on how to do it, as I am not that good at C++.

How did you implement the two outputs from the PGIE? With a tee?

Yes, a tee is being used. Since I am using the sample deepstream-app, it automatically creates a tee to attach multiple preprocesses; please refer to the Google Drive link shared above to see my full pipeline.

Can you also please tell me whether the tee duplicates the frames onto all of its src pads or splits the frames across them?

Also please tell me about the following

Also, I have observed one thing. When I am using this pipeline:

PGIE → SGIE1
  |→ Preprocess1 (for dynamic input to the non-image layer of the swap model) → SWAP
  |→ Preprocess2 (for the image layer of the swap model) → SWAP

With this pipeline (both preprocesses in parallel with SGIE1) and a single source, I get output where at some instances the face is swapped and at others it is just a white or black patch, whereas it used to work properly with static embeddings that I provided while initialising the model layers.

However, if I duplicate the same source using the num-sources property in the source group and run the non-image-layer preprocess on the second source, it works properly; but I am not able to use the num-sources flag for an RTSP input source.

I also have one more question about the pipeline defined above: when I add the preprocesses, my PERF FPS drops significantly, to about half of what it is without them, even though one preprocess contains only a simple print statement and the other sends a constant embedding to the non-image layer using cudaMemcpy. Why is this happening, even after using the num-sources flag?

So in total, I have asked you the following four questions:

It is not necessary to use a tee to branch to SGIE2 and SGIE3. The pipeline PGIE → SGIE1 → Preprocess1 → SGIE2 → Preprocess2 → SGIE3 can work.
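As a rough illustration, the serial layout might look like this in deepstream-app config terms. This is only a sketch: the group naming follows the [secondary-pre-process] convention handled in deepstream_secondary_preprocess.c, and the operate-on-gie-id values and file names are placeholders for your setup, not verified against your configs:

```ini
[secondary-pre-process0]
enable=1
# placeholder: feed SGIE2 (swap, image layer) after the detector branch
operate-on-gie-id=1
config-file=secondary_preprocess.txt

[secondary-pre-process1]
enable=1
# placeholder: feed SGIE3 after the face-recognition SGIE (its gie-unique-id)
operate-on-gie-id=2
config-file=secondary_preprocess_nonimg.txt
```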

The deepstream-app is open source, you can customize the code to get the function work.

It depends on what you have done in the TEE sink branches.

The code is open source, you can debug with your code. If you find any clue to identify that the issue is caused by DeepStream interfaces, you can report bug in the forum topic.

The code is open source, you can debug with your code. If you find any clue to identify that the issue is caused by DeepStream interfaces, you can report bug in the forum topic.

Actually, I have solved all the other issues, but I am still stuck on this one, as I am not able to find any reference or sample app that implements this pipeline.

As I am using the deepstream-app sample app, can you specify where I can make changes to get this pipeline? Or, at least, how I can get the output of my face recognition (SGIE2) in the preprocess of the swap model; even just that would be very helpful, with all the other preprocesses still called in parallel before the swap.