Optimizing Tracker Configuration in DeepStream Python for Person Detection

• Hardware Platform (Jetson / GPU): dGPU A40.
• DeepStream Version: 6.4-triton-multiarch.
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): 525.147.05.
• Issue Type( questions, new requirements, bugs): questions.
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I am using DeepStream Python, but I am encountering some issues with the tracker config. My primary inference engine (pgie) detects only people.

Initially, I used config_tracker_NvDCF_accuracy.yml, hoping to keep tracker IDs stable even when a subject is occluded, and to avoid ID swaps when two people cross paths. It did not completely solve the issue, and it introduced another problem: the bounding boxes from the pgie were sometimes lost, and they appeared to render very slowly.

I then switched to config_tracker_NvDCF_max_perf.yml, which made the bounding boxes appear quickly and no longer dropped them due to the tracker, but it started generating new tracker IDs whenever a person was occluded and swapped IDs when two people crossed paths. There seems to be a trade-off here. Can you help me find the best configuration for my use case? I am currently testing on 4 videos. Thank you very much.
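Not an authoritative answer, but a common starting point between the two extremes is to copy the shipped config_tracker_NvDCF_perf.yml and adjust the occlusion-related parameters. The sketch below shows the kind of knobs involved; the section and parameter names follow the NvDCF YAML schema in DeepStream 6.x, but the values are illustrative assumptions, not tuned for these videos:

```yaml
# Hypothetical tuning sketch based on config_tracker_NvDCF_perf.yml.
# Values below are illustrative and must be tuned per scene.
TargetManagement:
  probationAge: 3            # frames a new target must survive before getting an ID
                             # (higher -> fewer spurious new IDs)
  maxShadowTrackingAge: 51   # frames an occluded target is kept alive in "shadow"
                             # mode (higher -> better ID persistence through
                             # occlusion, at the cost of more stale targets)
  earlyTerminationAge: 1

DataAssociator:
  # Raising the visual-similarity threshold makes re-association stricter,
  # which can reduce ID swaps when two people cross paths.
  minMatchingScore4VisualSimilarity: 0.5

VisualTracker:
  featureImgSizeLevel: 3     # larger feature size -> more discriminative
                             # appearance model, but more GPU cost per target
  useColorNames: 1
  useHog: 0                  # enabling HOG improves robustness but costs GPU
```

In practice, maxShadowTrackingAge governs how long an ID survives a full occlusion, while the visual-tracker settings (featureImgSizeLevel, useHog) are the main levers in the accuracy-vs-speed trade-off you observed between the accuracy and max_perf presets.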

Yes, config_tracker_NvDCF_accuracy.yml will consume more GPU resources for tracking. It is a trade-off between accuracy and speed. Can you share a video that shows the bounding boxes being displayed very slowly?

Sorry, the video is private. Can I send it to you via private message?

Sure, you can share it via private message.