• Hardware Platform (Jetson / GPU): Any
• DeepStream Version: 6.2
After setting minDetectorConfidence and minTrackerConfidence to their minimum and increasing maxShadowTrackingAge, I still can't get the bounding-box estimations when a frame has no detection results from inference, even if the same track ID is later re-assigned.
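For reference, these are the knobs I changed, in the style of the NvDCF tracker config file (the exact values here are illustrative, not our production settings):

```yaml
BaseConfig:
  minDetectorConfidence: 0.0    # accept even low-confidence detections
TargetManagement:
  minTrackerConfidence: 0.0     # lowest possible tracker-confidence threshold
  maxShadowTrackingAge: 120     # keep lost tracks alive much longer (in frames)
```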
I would expect a tracker that includes a Kalman filter to keep outputting estimates of the bounding-box position.
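Conceptually, a constant-velocity state estimator can keep predicting the box even when there is no measurement, which is what I'd hope the tracker would report. A minimal pure-Python sketch of that idea (illustrative only, with a fixed blending gain standing in for a proper Kalman gain; this is not the NvDCF implementation):

```python
# Minimal constant-velocity predictor for a bbox center.
# Illustrative only -- not DeepStream/NvDCF code.

class BboxPredictor:
    def __init__(self, x, y):
        self.x, self.y = float(x), float(y)   # position estimate
        self.vx, self.vy = 0.0, 0.0           # velocity estimate
        self.gain = 0.5                       # fixed gain (stand-in for Kalman gain)

    def predict(self):
        # Prediction step: advance the state with the motion model only.
        # This is what could still be reported when a detection is missed.
        self.x += self.vx
        self.y += self.vy
        return self.x, self.y

    def update(self, mx, my):
        # Correction step: blend the prediction with the new measurement.
        rx, ry = mx - self.x, my - self.y     # residuals
        self.x += self.gain * rx
        self.y += self.gain * ry
        self.vx += self.gain * rx
        self.vy += self.gain * ry

p = BboxPredictor(100, 50)
p.predict(); p.update(104, 52)   # frame 1: detection available
p.predict(); p.update(108, 54)   # frame 2: detection available
print(p.predict())               # frame 3: missed detection -> prediction only
# -> (110.0, 55.0)
```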
Regarding NvDsObjectMeta, I tried extracting the bbox values both from rect_params and from the tracker_bbox_info field; the issue happens with both.
Can you use the latest DeepStream 6.3? Which tracker are you using? NvDCF?
Correct, I'm referring to the NvDCF tracker.
Unfortunately we finished the migration to 6.2 just a few months ago. Since we handle a lot of platforms and gateways, the effort of upgrading DeepStream is considerable, so our solution lags a bit behind the latest version.
But I'm interested to know whether there was a bug related to this issue that was fixed in DeepStream 6.3.
This is the object tracking data generated in the past frames but not reported as output yet. This can be the case when the low-level tracker stores the object tracking data generated in the past frames only internally because of, say, low tracking confidence or being tracked in Shadow Tracking mode, but later decides to report it due to increased confidence or re-association with detections. The user can define other types of miscellaneous data in NvMOTTrackerMiscData.
Whenever there are missed detections, the object is still tracked in Shadow Tracking mode, but the tracked data is not reported, because the object may actually have disappeared from the scene. Once it is detected again, the tracked data is reported as past-frame data.
Correct: if we set an inference interval higher than 1, the estimated bounding-box position is present in the object's metadata for the video frames where DeepStream doesn't perform inference (although on all those frames the tracker confidence is reported as -10%).
But that doesn't solve the flickering caused by a missing detection, and the corresponding missed track association, on the frames where inference is expected to happen.
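For context, the interval I mean is the standard nvinfer one, set in the pgie config file (illustrative fragment; the value is an example):

```
[property]
# Skip 4 frames between inferences, i.e. run inference on every 5th frame.
# On the skipped frames the tracker supplies the bbox estimates.
interval=4
```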
If there's no association between an existing track and a detected bounding box, the previously tracked object immediately stops appearing in the NVIDIA object metadata (even though internally the tracker keeps storing that information and the track ID is later re-assigned).
You can see here the output of the same clip when using a 5-frame interval for inference:
We have other GStreamer nodes in the pipeline that rely on the tracker (copying and maintaining an internal state of the tracked objects), and that flickering is problematic for them.
Unfortunately, estimating the bounding-box position when inference yields no detection is currently not handled by the NVIDIA tracker's output, and the problem propagates to downstream nodes.
For example, when we set a trigger to store a clip if an object enters or leaves an ROI, this can lead to a lot of false-positive triggers.
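To illustrate the failure mode: if a track simply vanishes from the metadata for one frame, a naive per-frame ROI check fires a spurious leave/enter pair. A toy sketch (not our production code; `roi_events` and the set-of-ids frame representation are made up for the example):

```python
# Toy ROI-event logic: an object "leaves" when its track id stops being
# reported, and "enters" when it appears again. A one-frame dropout of
# the track therefore produces a spurious leave+enter pair.

def roi_events(frames_with_track_ids, track_id):
    events = []
    inside = False
    for frame, ids in enumerate(frames_with_track_ids):
        present = track_id in ids
        if present and not inside:
            events.append((frame, "enter"))
        elif not present and inside:
            events.append((frame, "leave"))
        inside = present
    return events

# Track 7 is inside the ROI the whole time, but frame 2 has a missed
# detection, so the tracker reports nothing for it on that frame.
frames = [{7}, {7}, set(), {7}, {7}]
print(roi_events(frames, 7))
# -> [(0, 'enter'), (2, 'leave'), (3, 'enter')]
```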
Please consider discussing internally the possibility of exposing the Shadow Tracking mode estimations in real time, for example by letting users enable it with a tracker configuration variable.
Even if I set a really low threshold value for the tracker confidence, as soon as there's a missing detection the track confidence is set to a negative value and the track enters Shadow Tracking mode.
When there are just sporadic missing associations for a tracked object, I would expect the tracker to handle this by outputting its internal state estimation, instead of the tracked object intermittently disappearing from the object metadata.
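In the meantime, the workaround we're evaluating downstream is to "coast" a track for a few frames after it disappears, holding its last reported bbox instead of dropping it immediately. A hedged sketch of that idea (`coast_tracks`, `max_gap` and the dict layout are our own choices for illustration, not a DeepStream API):

```python
# Downstream smoothing: keep a track alive for up to `max_gap` frames
# after it stops appearing in the per-frame metadata, re-using its last
# reported bbox so consumers don't see the track flicker in and out.

def coast_tracks(per_frame_meta, max_gap=2):
    last_seen = {}   # track_id -> (frames_since_seen, bbox)
    output = []
    for meta in per_frame_meta:              # meta: {track_id: bbox}
        merged = dict(meta)
        for tid, (age, bbox) in last_seen.items():
            if tid not in meta and age < max_gap:
                merged[tid] = bbox           # coast with the stale bbox
        # Refresh ages: reset for ids seen this frame, increment otherwise.
        new_seen = {tid: (0, bbox) for tid, bbox in meta.items()}
        for tid, (age, bbox) in last_seen.items():
            if tid not in meta:
                new_seen[tid] = (age + 1, bbox)
        last_seen = new_seen
        output.append(merged)
    return output

# Track 7 is missing on frame 1 but reappears on frame 2; the gap is
# bridged with the frame-0 bbox instead of dropping the track.
meta_stream = [{7: (10, 10, 40, 80)}, {}, {7: (12, 10, 40, 80)}]
print(coast_tracks(meta_stream))
# -> [{7: (10, 10, 40, 80)}, {7: (10, 10, 40, 80)}, {7: (12, 10, 40, 80)}]
```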