Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson Nano 4GB
• DeepStream Version
6.0.1
• JetPack Version (valid for Jetson only)
4.6.2
• TensorRT Version
8.2.1
• NVIDIA GPU Driver Version (valid for GPU only)
N/A
• Issue Type( questions, new requirements, bugs)
question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
N/A
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
N/A
Is there a way to control how the nvtracker NvDCF/VisualTracker feature extraction (ColorNames and/or HOG) is scheduled?
My observations:
The nvtracker NvDCF can be configured to use feature extraction (ColorNames and/or HOG). This feature extraction leverages the VPI library, which in turn performs CUDA stream map/sync operations, etc.
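For reference, this is the kind of configuration I mean; a minimal excerpt in the style of the DeepStream 6.0 sample config_tracker_NvDCF_perf.yml (parameter names as I recall them from the sample; the values here are illustrative):

```yaml
VisualTracker:
  visualTrackerType: 1    # 1 = NvDCF visual tracker
  useColorNames: 1        # ColorNames feature extraction (goes through VPI)
  useHog: 1               # HOG feature extraction (also goes through VPI)
  featureImgSizeLevel: 2  # size level of the feature images
```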
So far I have not found a way/mechanism to control when the underlying gst-nvtracker library (which is highly multithreaded) starts triggering these calls.
From time to time, these calls get scheduled while primary and/or secondary inferences are running.
This drives up inference times, reduces determinism, and occasionally exceeds the frame interval I allow for the required tasks, which creates a cascade effect of pending inferences.
I know there is enough time available before or after my PGIE/SGIE inferences (I used Nsight Systems to verify this; see the capture command below). If only I could instruct the nvtracker to do its job outside the PGIE/SGIE inference windows, it would work…
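For context, the capture command I used was roughly the following (the output name and config file are placeholders for my actual setup):

```bash
# Trace CUDA and OS runtime calls so the tracker's VPI stream
# map/sync activity shows up next to the PGIE/SGIE inference ranges.
nsys profile --trace=cuda,nvtx,osrt --output=tracker_overlap \
    deepstream-app -c deepstream_app_config.txt
```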
So the question is: Is there such a mechanism available (API/configuration/etc…) ?
[Screenshot: ideal scheduling (nvtracker does not overlap PGIE or SGIE inference)]
[Screenshot: problematic scheduling (nvtracker overlaps PGIE or SGIE inference)]
Whenever the nvtracker VPI calls occur while an inference is ongoing, the inference execution time is noticeably impacted (cf. the first 3 SGIE inferences in the bottom graph).
Note: I have no problem adjusting C++ code (gstnvinfer.cpp, nvdsinfer_context_impl.cpp, or others) if necessary.
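For example, one mitigation I considered patching into nvdsinfer_context_impl.cpp (where the inference CUDA streams are created) is raising the inference stream priority, so inference kernels are favored by the GPU scheduler over the tracker's kernels. A minimal, untested sketch of such a helper; note it only biases the GPU scheduler and does not control when the VPI map/sync calls are issued, which is what I am really after:

```cpp
#include <cuda_runtime_api.h>
#include <cstdio>

// Create a CUDA stream at the highest available priority so that work
// enqueued on it (e.g. TensorRT inference) is favored by the GPU
// scheduler over work on default-priority streams (e.g. tracker kernels).
static cudaStream_t createHighPriorityStream()
{
    int leastPriority = 0, greatestPriority = 0;
    // Note: numerically lower values mean higher priority in CUDA.
    cudaDeviceGetStreamPriorityRange(&leastPriority, &greatestPriority);

    cudaStream_t stream = nullptr;
    cudaError_t err = cudaStreamCreateWithPriority(
        &stream, cudaStreamNonBlocking, greatestPriority);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaStreamCreateWithPriority failed: %s\n",
                     cudaGetErrorString(err));
        return nullptr;
    }
    return stream;
}
```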