Raw tensor output of primary_infer lost when using the interval property

Hello,

I have modified the “interval” property in the “deepstream-app” configuration for primary_infer, and also changed the pipeline like this (…nvstreammux->primary_infer->secondary_infer0->secondary_infer1->fakesink), but the probe on the sink pad of fakesink cannot find the tensor meta of primary_infer. By the way, I turned on output-tensor-meta for all the nvinfer plugins, which all process the full frame (process-mode=1).
Any suggestions? Why am I losing the primary_infer tensor meta?
You can check the attachment below for the pipeline.
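For context, the keys in question in each nvinfer config file look roughly like this (a sketch with illustrative values; the actual files are in my attachment):

```ini
[property]
process-mode=1          # 1 = primary (full-frame) mode
output-tensor-meta=1    # attach raw output tensors as user meta
interval=1              # number of consecutive batches to skip for inference
```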

process-mode for an SGIE should be 2; you can refer to the documentation:
• Secondary mode: Operates on objects added in the meta by upstream components

Also, you can see the sample code in sources/apps/sample_apps/deepstream-infer-tensor-meta-test/, which “creates multiple instances of the “nvinfer” element. Each instance of “nvinfer” uses the TensorRT API to infer on frames/objects. Every instance is configured through its respective config file, enabling each instance to generate raw tensor data on the full frame or on objects.”

In my case, the pipeline presented in the attachment is different from the example in “deepstream-infer-tensor-meta-test”. Of course, some features of both are the same. But the point is that the raw output tensor meta for primary_infer is lost from the frame_user_meta_list when the “interval” property is used on primary_infer. By the way, my pipeline contains three nvinfer plugins in sequence, which are all classifier models processing the full frame (which means process-mode should be 1).
In summary, I want to solve the lost-tensor-meta problem when using the “interval” property.

But you have 2 SGIEs, so process-mode should be 2 for them. Is it possible for you to share sample code for further checking?

Hi amycao,
I have three nvinfer plugins, and they are all classifier models which do inference separately but in sequence, so I set process-mode=1 for the 2 SGIEs.
I did test a simple pipeline with 3 primary classifier nvinfer plugins, setting process-mode=1 and output-tensor-meta=1. The result is that the interval property works for each nvinfer plugin. Maybe I misunderstand the “interval” and “process-mode” properties and the definition of primary and secondary mode, so the result is wrong in my project based on “deepstream-app”.
The sample code and configurations are in the attachment.
In summary, my goal is to skip a given number of frames per nvinfer plugin during inference.
deepstream-infer-tensor-meta-test.tar.gz (4.19 KB)
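To make my understanding concrete, here is a toy model (not DeepStream code, just my reading of the documented behavior) of how “interval” skips inference: with interval=N, an nvinfer instance infers on one batch and then skips the next N, so tensor meta is only attached on the inferred frames.

```python
# Toy model of nvinfer's "interval" property: with interval=N, inference
# runs on a batch and then the next N consecutive batches are skipped.
# Tensor meta would only appear on the frames returned here.

def inferred_frames(num_frames: int, interval: int) -> list:
    """Return indices of frames on which inference actually runs."""
    return [f for f in range(num_frames) if f % (interval + 1) == 0]

print(inferred_frames(10, 0))  # no skipping: [0, 1, 2, ..., 9]
print(inferred_frames(10, 1))  # every other frame: [0, 2, 4, 6, 8]
print(inferred_frames(10, 2))  # skip 2 between inferences: [0, 3, 6, 9]
```

A probe downstream would therefore only find the raw tensor meta on those frames, not on the skipped ones.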

Hi,
I ran your code and it failed: Config file path: dstensor_sgie3_config.txt, NvDsInfer Error: NVDSINFER_CUDA_ERROR.
Is something missing?
I looked through your code and config files; it seems you use the same SGIE models shipped in the DeepStream package, so I wonder why you customize like this. We need a primary GIE for detection, with the inferred results sent downstream to the tee, which the SGIEs can connect to for inference. We have a plugin manual for nvinfer: NVIDIA Metropolis Documentation

Hi amycao,
The sample is based on “deepstream-infer-tensor-meta-test” and runs fine in my environment (GPU: V100, docker: nvcr.io/nvidia/deepstream 4.0.1-19.09-devel).
Actually, I built this pipeline because it will be used to review live-broadcast streams, which may contain violence, porn, and other prohibited scenes. So the pipeline has several classifier models that separately check whether the stream violates the live-broadcasting rules.

Hi,
I can run your code successfully on a T4; it seems I don’t understand your issue quite clearly.
Here is a test experiment based on your source, with just 1 primary GIE: with or without the interval
property set to 1 (or another integer), tensor data can be generated in both cases.
interval set to 1
All_infer_index:0 class0:0.072274 class1:0.230399 class2:0.120353 class3:0.120259 class4:0.200309 class5:0.387021 class6:0.124313 class7:0.163995 class8:0.281056 class9:0.198988 class10:0.206491 class11:0.231149 class12:0.173731 class13:0.206773 class14:0.172936 class15:0.234138
interval set to 0
All_infer_index:0 class0:0.412053 class1:0.054210 class2:0.072218 class3:0.032050 class4:0.007072 class5:0.004062 class6:0.095556 class7:0.138709 class8:0.154919 class9:0.056768 class10:0.014820 class11:0.126533 class12:-0.016039 class13:-0.001077 class14:0.000878 class15:0.015592
Can you describe your issue in more detail?

Hi, amycao
Thank you for your reply.
I did receive the tensor data, and the focus of my issue lies on 2 concepts: “process-mode” and “interval”.
I read the details of the nvinfer plugin (https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_details.02.01.html%23wwpID0E0GX0HA) and found that: 1. primary mode can be used for both detectors and classifiers, whether the GIE is first or later in the pipeline, and there can be several primary-mode instances in one pipeline; 2. the “interval” property can only be used in primary mode, and it takes effect on the following secondary-mode instances that depend on that primary. Am I right?
In fact, I am more interested in point 2.
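To illustrate what I mean, a primary/secondary pairing would look roughly like this (a sketch with illustrative values; operate-on-gie-id ties the SGIE to the PGIE’s gie-unique-id):

```ini
# pgie_config.txt -- primary, full frame, skips batches via interval
[property]
gie-unique-id=1
process-mode=1
interval=2              # infer on 1 batch, then skip 2
output-tensor-meta=1

# sgie_config.txt -- secondary, runs on objects attached by the PGIE
[property]
gie-unique-id=2
process-mode=2
operate-on-gie-id=1     # only objects produced by gie-unique-id=1
output-tensor-meta=1
```

Since the SGIE in secondary mode only sees objects from the PGIE, the PGIE’s interval setting naturally limits how often the SGIE infers.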


Yes, you are right.