Please provide complete information as applicable to your setup.
• Hardware Platform (GPU): RTX 3060
• DeepStream Version: 6.0.1
• Issue Type: Question
• Python 3
I have a question about using secondary-reinfer-interval.
My Pipeline: uridecodebin > streammux > nvinfer > tracker > nvinferserver > …
- As far as I know, the interval property only applies to nvinfer in my pipeline. And since a tracker element is upstream of nvinferserver, nvinferserver will operate in secondary mode.
- Following from the above, nvinferserver will run inference, but not on every frame (once per 15 frames by default?). So if I set the secondary-reinfer-interval property in nvinfer to 0, will nvinferserver run inference on every subsequent frame?
I tried that, but the pipeline does not behave as I expected. It only runs inference on a frame once every few seconds, no matter what integer I set this config to.
The point is: how can I control nvinferserver to run inference at a specific interval while keeping the tracker in my pipeline?
Also, can anyone explain the relationship (or interaction) between the tracker and nvinferserver when they work together in this pipeline?
Firstly, nvinfer is open source, and nvinferserver is open source in the DeepStream 6.2 SDK; you can check the code if interested.
nvinferserver's default interval is 0; please find DEFAULT_INTERVAL in the nvinferserver code.
secondary-reinfer-interval is only used for the SGIE; please refer to this topic: secondary-reinfer-interval
Please find the interval parameter in nvinferserver.
nvtracker is used to track the objects detected by the PGIE; here, nvinferserver is used to run inference on the objects detected by the PGIE. Please refer to the doc: nvtracker
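For reference, the interval setting lives in the input_control section of the nvinferserver config file (protobuf text format). A minimal sketch of that section for an SGIE is below; the field names follow the gst-nvinferserver documentation, but the values (e.g. the PGIE id of 1) are examples only:

```
# Fragment of a gst-nvinferserver config (protobuf text format).
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS  # SGIE: operate on objects from upstream
  operate_on_gie_id: 1                     # only infer on objects from PGIE with this id
  interval: 0                              # 0 = do not skip batches
  async_mode: false                        # async secondary mode off (see caching note below)
}
```

Per the plugin docs, the object-caching optimization applies when the secondary classifier runs in async mode with a tracker upstream, so async_mode is the switch most directly tied to that behavior.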
Thanks for your answer, but I have a small concern.
In nvinferserver docs:
Secondary mode: Operates on objects added in the metadata by upstream components.
When the plugin is operating as a secondary classifier in async mode along with the tracker, it tries to improve performance by avoiding re-inferencing on the same objects in every frame. It does this by caching the classification output in a map with the object’s unique ID as the key. The object is inferred upon only when it is first seen in a frame (based on its object ID) or when the size (bounding box area) of the object increases by 20% or more. This optimization is possible only when the tracker is added as an upstream element. (*)
In this case, nvinferserver does not run inference on every frame, right?
If so, how can I turn off this behavior? (secondary-reinfer-interval = -1? or async=false?)
And how does the interval property affect this situation (with that (*) behavior turned on and off)?
Yes, in SGIE mode, nvinferserver does not run inference on every frame.
Please refer to GstNvInferServerImpl::processObjects in the DeepStream SDK; you can modify shouldInferObject to make it run inference on the current frame.
In SGIE mode, there is a hardcoded interval in nvinferserver; see MAX_SECONDARY_REINFER_INTERVAL.
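The decision logic the docs describe can be sketched roughly as below. This is a simplified, self-contained model of the idea (first sighting of a tracker ID, bbox growing by 20% or more, or a maximum re-infer interval elapsing all trigger inference), not the actual SDK code; class and member names here are illustrative:

```cpp
#include <cstdint>
#include <unordered_map>

// Simplified model of a secondary re-infer decision keyed by tracker object ID.
// Illustrative only; the real logic lives in shouldInferObject in the
// DeepStream nvinferserver sources.
struct ObjectHistory {
    float    lastInferredArea = 0.0f;   // bbox area at last inference
    uint64_t lastInferredFrame = 0;     // frame number of last inference
};

class ReinferPolicy {
public:
    explicit ReinferPolicy(uint64_t maxReinferInterval)
        : maxInterval_(maxReinferInterval) {}

    // Returns true if the object should be (re-)inferred on this frame.
    bool shouldInfer(uint64_t objectId, float bboxArea, uint64_t frameNum) {
        auto it = history_.find(objectId);
        if (it == history_.end()) {
            // First time this tracker ID is seen: always infer.
            history_[objectId] = {bboxArea, frameNum};
            return true;
        }
        ObjectHistory& h = it->second;
        bool grew  = bboxArea >= h.lastInferredArea * 1.2f;        // grew >= 20%
        bool stale = frameNum - h.lastInferredFrame >= maxInterval_;
        if (grew || stale) {
            h = {bboxArea, frameNum};
            return true;
        }
        return false;  // reuse the cached classification for this ID
    }

private:
    uint64_t maxInterval_;
    std::unordered_map<uint64_t, ObjectHistory> history_;
};
```

In this model, forcing re-inference on every frame amounts to making shouldInfer always return true (or setting the max interval to 0), which matches the suggestion to modify shouldInferObject or the hardcoded MAX_SECONDARY_REINFER_INTERVAL.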
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.