I have modified the “interval” property in the “deepstream-app” configuration for primary_infer, and also changed the pipeline to (…nvstreammux->primary_infer->secondary_infer0->secondary_infer1->fakesink), but the probe on the sink pad of fakesink cannot find the tensor meta of primary_infer. By the way, I have enabled output-tensor-meta for all the nvinfer plugins, and they all process the full frame (process-mode=1).
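For reference, here is a minimal sketch of the relevant settings in an nvinfer config file; the values are illustrative, not my actual configuration:

```ini
# primary_infer config sketch (values are examples only)
[property]
process-mode=1        # 1 = operate on the full frame (primary mode)
output-tensor-meta=1  # attach raw output tensors to the metadata
interval=2            # number of consecutive frames to skip between inferences
```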
Any suggestions? Why am I losing the primary_infer tensor meta?
You can check the attachment below for the pipeline.
You can also look at the sample code in sources/apps/sample_apps/deepstream-infer-tensor-meta-test/, which demonstrates how to “create multiple instances of the “nvinfer” element. Each instance of “nvinfer” uses the TensorRT API to infer on frames/objects. Every instance is configured through its respective config file and enables each instance to generate raw tensor data on the full frame or object.”
In my case, the pipeline shown in the attachment is different from the “deepstream-infer-tensor-meta-test” example. Of course some features of the two are the same. But the point is that the raw output tensor meta for primary_infer is lost from frame_user_meta_list when the “interval” property is set on primary_infer. By the way, my pipeline contains three nvinfer plugins in sequence; all are classifier models and process the full frame (which means process-mode should be 1).
All in all, I want to solve the problem of the lost tensor meta when using the “interval” property.
I have three nvinfer plugins, and they are all classifier models which infer separately but in sequence, so I set process-mode=1 for the two SGIEs.
I tested a simple pipeline with 3 primary classifier nvinfer plugins, setting process-mode=1 and output-tensor-meta=1. The result is that the interval property works for each nvinfer plugin. Maybe I am misunderstanding the “interval” and “process-mode” properties and the definition of primary and secondary mode, so the result is wrong in my project based on “deepstream-app”.
The sample code and configurations are in the attachment.
In short, my target is to skip a given number of frames for the nvinfer plugins during inference. deepstream-infer-tensor-meta-test.tar.gz (4.19 KB)
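To be explicit, this is my understanding of the “interval” semantics (an assumption on my part, please correct me if wrong):

```ini
# Assumed meaning of "interval" in full-frame mode:
# interval=0 -> infer on every frame
# interval=1 -> infer on every 2nd frame (skip 1 in between)
# interval=N -> infer on 1 out of every N+1 frames
interval=2
```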
The sample is based on “deepstream-infer-tensor-meta-test” and runs fine in my environment (GPU: V100, docker: nvcr.io/nvidia/deepstream 4.0.1-19.09-devel).
Actually, I built this pipeline to review live-broadcast streams, which may contain violence, porn, and other prohibited scenes. So the pipeline has several separate classifier models, each checking whether the stream violates the live-broadcasting rules.
I can run your code successfully on T4; it seems I do not understand your issue clearly.
Here is a test experiment based on your source, with just 1 primary GIE: whether the interval property is left unset or set to 1 or another integer, tensor data can be generated in both cases.
interval set to 1
interval set to 0
Can you describe your issue in more detail?