Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson Nano
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3.0
• Issue Type: Question
Hello.
I'm using the DeepStream Python bindings to detect and track objects (in my case, people). Detection is not perfect: there are frames where an object is missed and then detected again, so no bounding box is drawn for the missed frames. I added an NvDCF tracker to the pipeline and displayed the tracking ID on screen. Even though the object was not detected for a couple of frames, the ID stayed the same afterwards, which means the object was still being tracked. The problem is that for the frames where the object was not detected (1-5 frames), neither the ID nor the bounding box was shown on screen.
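For reference, this is roughly how the tracker is set up in my pipeline (a simplified sketch; the element properties and the NvDCF library path are from my DS 5.1 setup and may differ on yours):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# detector (nvinfer) followed by the NvDCF tracker (nvtracker)
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "config_infer_primary.txt")

tracker = Gst.ElementFactory.make("nvtracker", "tracker")
tracker.set_property("ll-lib-file",
                     "/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so")
tracker.set_property("ll-config-file", "tracker_config.yml")
tracker.set_property("tracker-width", 640)
tracker.set_property("tracker-height", 384)

# ... the rest of the pipeline (pipeline.add(...), pgie.link(tracker),
# tracker.link(nvvidconv), nvosd, sink, etc.) is omitted here
```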
If the interval is set to 0, no frames are skipped for inference. If the interval is set to 1, detection runs on one frame and the tracker covers the next frame; if it is set to 10, detection runs on one frame and the tracker covers the next 10 frames. Tracking is very good after a detection (for those 10 frames), and bounding boxes are shown on screen. However, I couldn't find a way to set the tracker duration and the detection interval separately. If I want detection every 2 frames and tracking for up to 7 frames (as long as the tracker's confidence is > x), I can't do that. Or can I?
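The interval I'm referring to is the nvinfer "interval" property (number of frames skipped for inference between detections). Continuing the sketch above, this is how I set it:

```python
# Run inference on one frame, then skip the next `interval` frames;
# the skipped frames are covered only by the tracker.
pgie.set_property("interval", 10)  # detect on 1 frame, track for the next 10
```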
I tried setting minTrackingConfidenceDuringInactive to a lower value (0.5 or lower). Its description reads: "Min tracking confidence during INACTIVE period. If tracking confidence is higher than this, then tracker will still output results until next detection." However, it didn't work, and when I run the script the parameter is reported as deprecated:

[NvDCF][Warning] minTrackingConfidenceDuringInactive is deprecated
tracker_config.yml (6.7 KB)
If the object is still being tracked, can I get the tracker's coordinates for that object on the frames where it is not detected?
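What I have in mind is something like the probe below, which reads the tracker-reported box and ID from the object metadata. This is only a sketch: it assumes tracker_bbox_info and object_id are still populated on the frames where the detector misses the object, which is exactly what I'm not sure about.

```python
import pyds
from gi.repository import Gst

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Tracker-reported bbox (may differ from the detector bbox)
            tbox = obj_meta.tracker_bbox_info.org_bbox_coords
            print("frame", frame_meta.frame_num,
                  "id", obj_meta.object_id,
                  "bbox", tbox.left, tbox.top, tbox.width, tbox.height)
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

I would attach this probe to the OSD sink pad (or the tracker src pad) and check whether these objects are present in the metadata on the frames where nothing is drawn.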