Low object tracking by nvDCF

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Xavier NX
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1
• TensorRT Version: 8.5.2
• Issue Type: Why is nvtracker not able to track all objects in the frame when using nvDCF_perf.yml?
• Requirement details: I have created a custom DeepStream pipeline with 3 models (YOLOv8, UNet, OCR). The sequence of the pipeline is: streammux - queue1 - YoloV8 - Unet - queue2 - OCR - tracker - filter1 - queue3 - sink. The probe is attached on filter1. All objects are inferred and detected correctly by the models, but the tracker does not track all of them; only a very few objects get tracked. All models are trained on a custom dataset and run in FP16. The issue also occurs when fast-moving objects are in the stream.

Can you have a try with config_tracker_NvDCF_accuracy.yml? Please also check the confidence output by the detection model; the nvtracker config has a threshold that filters out objects below it.
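
For reference, those thresholds live in the tracker YAML (parameter and section names as in the DeepStream 6.2 NvDCF configs; the values below are only illustrative):

```yaml
BaseConfig:
  minDetectorConfidence: 0.19   # detections with confidence below this are ignored by the tracker

TargetManagement:
  minTrackerConfidence: 0.15    # targets whose tracker confidence drops below this go to shadow mode (not output)
```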

I tried config_tracker_NvDCF_accuracy.yml. I checked the confidence output by the model and it is between 0.6 and 0.9. I also added probes before and after the tracker and printed the confidence in both. Before the tracker, all objects are detected with their respective confidences, but after the tracker only a few objects are tracked, and sometimes not even one object is tracked.
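
This is roughly how I wrote the probes (a minimal sketch using the standard pyds metadata iteration; the function name and the pads it is attached to are just from my pipeline):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def conf_print_probe(pad, info, u_data):
    """Print per-object detector confidence (tracker fields are only meaningful downstream of nvtracker)."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            print(f"frame {frame_meta.frame_num}: id={obj.object_id} "
                  f"det_conf={obj.confidence:.2f} trk_conf={obj.tracker_confidence:.2f}")
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attached once upstream and once downstream of the tracker, e.g.:
# tracker.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, conf_print_probe, 0)
# filter1.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, conf_print_probe, 0)
```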

Can you try to reproduce your issue with deepstream-app? You can replace your test video first, then replace the PGIE.
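
One way to do that is to edit these groups in one of the sample deepstream-app configs (the paths and file names below are placeholders, not your actual files):

```
[source0]
enable=1
type=3
uri=file:///path/to/your/test_video.mp4
num-sources=1

[primary-gie]
enable=1
config-file=your_pgie_config.txt

[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_accuracy.yml
```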

I ran deepstream-app with my video and my model, and I am getting the same issue as in my custom pipeline.

Can you share a video showing the issue? It seems the issue is related to the video or the detection model.

I have messaged you the videos. There are two: one with the tracker ON and one with the tracker OFF. I only used the UNet model for this test. The videos are recorded from deepstream-app, not from my pipeline.

The objects in your video last a very short time. Can you have a try setting probationAge to 0 in the nvtracker config file? For more details on the parameter, please check: Gst-nvtracker — DeepStream documentation
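
For reference, the parameter sits in the TargetManagement section of the NvDCF YAML (minimal excerpt; the other keys in that section are left out):

```yaml
TargetManagement:
  probationAge: 0   # report new targets immediately instead of keeping them in a probation period for N frames
```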

I have already tried setting probationAge to 0, but the tracking performance remained the same. What parameters should I change or tweak in NvDCF to improve tracking of fast-moving objects?

Can you check whether the shadow tracking of target data works for your use case? See: Gst-nvtracker — DeepStream documentation

I tried changing maxShadowTrackingAge and earlyTerminationAge and saw a significant improvement in tracking. I tested this in deepstream-app, which is in C++, but my pipeline is in Python. I tried the same configuration in the DeepStream Python app deepstream-test2 (H.264 input) and got less tracking than with deepstream-app: 40-50% of the objects were not tracked in the Python app. Which Python app would you recommend for checking the tracker behaviour? I have also sent you my tracker config file.
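
For anyone with the same problem, these are the TargetManagement parameters I was adjusting (the values below are placeholders, not the ones from my final config):

```yaml
TargetManagement:
  probationAge: 0           # activate new targets immediately
  maxShadowTrackingAge: 60  # how many frames an unmatched target is kept alive in shadow mode before termination
  earlyTerminationAge: 3    # a target still in its probation period is dropped once its shadow tracking age hits this value
```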

Detection and tracking depend only on the model and the config files. The Python app only sets up the pipeline and passes the config files to the detector and tracker, so the Python and C++ apps should have the same behavior if the pipeline and config files are the same.

Found the resolution. You can close the topic.

Glad to know you found the solution. It would be helpful if you could share it.

I was passing the wrong ll-config-file path in the tracker configuration.
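
In case someone else hits this: a minimal sketch (with hypothetical paths) of setting the nvtracker properties in Python and verifying that the ll-config-file path exists before starting the pipeline. A wrong path can make the tracker silently fall back to default parameters instead of erroring out, so it is worth checking explicitly:

```python
import os
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
tracker = Gst.ElementFactory.make("nvtracker", "tracker")

# Hypothetical paths -- adjust to your installation and working directory.
ll_lib = "/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so"
ll_cfg = os.path.abspath("config_tracker_NvDCF_accuracy.yml")

# Fail early if either file is missing or the path is wrong.
for path in (ll_lib, ll_cfg):
    if not os.path.isfile(path):
        raise FileNotFoundError(f"Tracker file not found: {path}")

tracker.set_property("ll-lib-file", ll_lib)
tracker.set_property("ll-config-file", ll_cfg)
tracker.set_property("tracker-width", 640)
tracker.set_property("tracker-height", 384)
```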

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.