Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): AGX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 8.0
• NVIDIA GPU Driver Version (valid for GPU only): 10.2
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name — for which plugin or for which sample application — and the function description)
Hello, I’m trying to run detection on a video, but the detection performance is not good, so I thought of alternating: drawing boxes from the pgie on one frame and from the tracker on the next. (tracker = NvDCF, IOU, or KLT)
If I set the pgie interval to 0, isn’t it correct that the tracker simply manages the objects? I’m wondering whether the tracker still helps, and affects the detector’s boxes, when interval=0. My goal is to draw boxes using the tracker’s state estimation (e.g. moving average, Kalman filter).
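For reference, the interval in question is a property of the pgie (nvinfer) config file. A minimal sketch of the relevant fragment (illustrative only; check your own config for the actual values):

```ini
# Hypothetical excerpt from a pgie (nvinfer) config file.
# interval = number of consecutive frames/batches on which inference
# is skipped; 0 (the default) means the detector runs on every frame.
[property]
interval=0
```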
I don’t understand what you mean. The tracker tracks the objects detected by the pgie. Sometimes the detector misses an object for one or two frames, but the tracker can keep tracking it given appropriate settings of the tracker algorithm. Please refer to the documentation.
Oh, for example: if the pgie detects an object but its confidence does not exceed the tracker’s confidence threshold, or if the object is in a probationary state, the osd will not draw the box. Is that right?
The pgie detects an object in one frame and provides the bbox to the tracker. The tracker takes the bbox in this frame and judges whether it is the same object as in the last frame using the tracking algorithm. So the tracker relies on the bboxes provided by the pgie plus the tracking algorithm to make its decision. The pgie (detector) only cares about the current frame; there is no probationary state for the pgie.
Thank you for your answer.
I think I used “probationary state” incorrectly; what I meant was the interval period when interval > 0. When I looked at the SDK documentation, it said detection (pgie inference) is skipped when interval > 0 (and I did see the fps go up), but I thought the boxes entering the tracker’s input during those skipped frames might be the results from the previous frame. Is that right?
frame 0 : pgie detect → tracking → bboxes
frame 1 : (frame1 + frame0 bboxes) → tracking → bboxes
frame 2 : (frame2 + frame1 bboxes) → tracking → bboxes
frame 3 : pgie detect → tracking → bboxes
is it right?
Fundamentally, I was wondering what information the tracker uses to compare against its track queue during the interval period. Thank you for your fast response.
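The frame pattern sketched above corresponds to interval=2. A minimal sketch (my own illustration, not DeepStream code) of which frames get fresh pgie detections for a given interval, assuming inference runs on one frame and is then skipped for the next N frames while the tracker runs on every frame:

```python
# Illustrative sketch: pgie inference schedule under nvinfer's
# "interval" property. Not actual DeepStream code.

def runs_inference(frame_idx: int, interval: int) -> bool:
    """True if the pgie performs inference on this frame,
    assuming inference on frame 0 and every (interval + 1)th frame after."""
    return frame_idx % (interval + 1) == 0

# With interval=2, matching the frame 0..3 pattern above:
schedule = ["detect+track" if runs_inference(f, 2) else "track-only"
            for f in range(4)]
# schedule == ["detect+track", "track-only", "track-only", "detect+track"]
```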
Not exactly. The tracking algorithm is complicated and configurable. It does not depend on nvinfer; it only uses bboxes as input, and the tracker knows nothing about the nvinfer interval.
Hmm, I’ll read the document,
but… doesn’t the tracker depend on the detection results?
Doesn’t the box input depend on the most recent detection boxes? If so, where does the bbox that is compared against the tracker queue you mentioned come from?
Thank you!
Yes, the tracker depends on the detection results. And the tracker algorithm can predict the future bbox based on the previous bboxes and the appearance of the object. So it is more precise to say the tracker depends on the previous objects.
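To make "predict the future bbox based on the previous bboxes" concrete, here is a toy constant-velocity prediction step, in the spirit of the state estimators a tracker can use. This is my own minimal illustration, not the actual NvDCF implementation:

```python
# Toy sketch only: predict the next bbox assuming the object keeps
# moving at the same velocity observed between the last two frames.
# Real trackers (e.g. a Kalman filter) also model uncertainty.

def predict_next_bbox(prev, curr):
    """Predict the next (x, y, w, h) from the two most recent
    observations under a constant-velocity assumption."""
    return tuple(c + (c - p) for p, c in zip(prev, curr))

# Object moved 5 px right between the last two frames, so the
# prediction continues that motion:
predicted = predict_next_bbox((10, 20, 40, 40), (15, 20, 40, 40))
# predicted == (20, 20, 40, 40)
```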
Okay, thank you for your response
So, even if an object in the tracker’s active queue does not match any box currently being input, does it globally search the entire image with the DCF filter?
First of all, I decided to use the tracker because I thought it could compensate for bbox flicker in detection, and I expected a higher fps.
If interval > 0, only the input frame batch is passed in,
so isn’t it right that no separate data association is done on those frames? In other words, isn’t it right that boxes are created by comparing the target queue against the input frame batch according to each tracker’s own method? (i.e. NvDCF => IoU & visual similarity)
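For what it’s worth, the IoU part of that association score is simple to sketch. Below is my own minimal version (not DeepStream’s implementation) of the overlap between a tracker’s predicted bbox and a detector bbox, both as (x, y, w, h):

```python
# Minimal IoU (intersection over union) sketch for bbox association.
# Boxes are (x, y, w, h); returns a score in [0, 1].

def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # Width/height of the overlapping region (0 if disjoint).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Identical boxes score 1.0; disjoint boxes score 0.0:
same = iou((0, 0, 10, 10), (0, 0, 10, 10))      # 1.0
apart = iou((0, 0, 10, 10), (20, 20, 5, 5))     # 0.0
```

A tracker-like association would then match each queued target to the input box with the highest combined score (IoU plus, for NvDCF, a visual-similarity term).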