I am doing some tests on the NvDCF tracker using a DeepStream container running on a server with an A6000 dGPU.
As far as I understood from this video, the visual tracking in NvDCF should allow me to localize an object at frame #1 even if there is no output from the detector, since the tracker learned a correlation filter from the detector's information at frame #0.
This is very interesting for my use case. I need to detect a person in a video. As long as the person is walking, I can detect and track them without problems. At some point in the video, the person lies down on the ground and the detector, understandably, fails.
How can I use the NvDCF tracker to perform localization and display the metadata (e.g., the bounding box) when there is no output from the detector? I expect the learned DCF to always output something, even if it would eventually drift off the target. This raises another question: how can I distinguish the bounding boxes produced by the detector-tracker synergy from those produced by the tracker alone?
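To make the second question concrete, here is how I imagine telling the two kinds of boxes apart in a pad-probe callback. This is only a sketch: the field names follow `NvDsObjectMeta` (`confidence` for the detector score, `tracker_confidence` for the tracker score), but the idea that tracker-only objects carry a negative detector confidence (e.g., a -0.1 sentinel) is my assumption, not verified behavior, and the dataclass below is a stand-in for the real metadata so the logic can be shown without a running pipeline.

```python
# Stand-in for NvDsObjectMeta; in a real probe these fields would come
# from the batch metadata attached to the GstBuffer.
from dataclasses import dataclass

@dataclass
class ObjMeta:
    object_id: int
    confidence: float          # detector confidence; assumed negative when no detection matched
    tracker_confidence: float  # DCF correlation response for the tracked box

def classify(obj: ObjMeta) -> str:
    """Label a box as detector-backed or tracker-only."""
    # Assumption: when the tracker outputs a box with no associated
    # detection in this frame, the detector confidence is negative.
    if obj.confidence < 0:
        return "tracker-only"
    return "detector+tracker"

# Simulated frame: the person is detected while walking, then the
# detector misses once they lie down but the tracker still localizes.
frame_objects = [
    ObjMeta(object_id=1, confidence=0.87, tracker_confidence=0.92),
    ObjMeta(object_id=1, confidence=-0.1, tracker_confidence=0.41),
]

for obj in frame_objects:
    print(obj.object_id, classify(obj))
```

Is a check like this the intended way to separate the two cases, or is there a dedicated flag in the metadata for shadow-tracked/tracker-only objects?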
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e., for which plugin or which sample application, and the function description.)