Optical flow + tracker with deepstream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson ORIN and dGPU
• DeepStream Version 7.0
• Issue Type( questions, new requirements, bugs) Question

Hi, I would like to know if there is currently any mechanism to use the optical flow vectors from the nvof plugin to improve tracking with the nvtracker plugin. If not, is it something that could be implemented later? Or are there other tools I could use to improve tracking with optical flow?

It could be useful because tracking small objects on a textured background can be very challenging for a classical detector + tracker setup.
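To illustrate the idea, here is a minimal, self-contained sketch (plain NumPy, not the DeepStream API) of how dense per-pixel flow vectors, such as those nvof produces, could be aggregated to predict a small object's position when the detector drops out. The function name and the array layout are my own assumptions for the example:

```python
import numpy as np

def predict_box_with_flow(box, flow):
    """Shift a bounding box by the median optical-flow vector inside it.

    box  : (x, y, w, h) in pixels
    flow : H x W x 2 array of per-pixel (dx, dy) motion vectors
    Returns the predicted (x, y, w, h) for the next frame.
    """
    x, y, w, h = box
    patch = flow[y:y + h, x:x + w].reshape(-1, 2)
    # The median is robust to outlier vectors from the textured background.
    dx, dy = np.median(patch, axis=0)
    return (int(round(x + dx)), int(round(y + dy)), w, h)

# Toy example: uniform rightward motion of 3 px per frame
flow = np.zeros((100, 100, 2))
flow[..., 0] = 3.0
print(predict_box_with_flow((10, 20, 8, 8), flow))  # -> (13, 20, 8, 8)
```

In a real pipeline this prediction would only bridge short detector gaps; it is not a substitute for the tracker's own state estimation.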

Thank you

Here is the document for nvtracker: Gst-nvtracker — DeepStream documentation
Can you share your use case or videos for the very challenging tracking use case?

For example, for drone tracking, my model is very good when the drone is in the sky, but struggles a lot when there is a textured background.

Please use the latest DeepStream version 7.1. Which nvtracker configuration are you using? Are you using /opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml? Can the detector detect the drone when there is a textured background?

Here is the same video without the tracker. The detector does struggle to detect the drone against the vegetation, which is why I was wondering whether optical flow could help me:


I was not using the accuracy.yml config file but a custom one. Here is the result with DeepStream 7.1 and config_tracker_NvDCF_accuracy.yml:

Basically, nvtracker depends on the detector. You can check whether shadow tracking works for your use case with config_tracker_NvDCF_accuracy.yml: Gst-nvtracker — DeepStream documentation
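For reference, shadow tracking is tuned in the TargetManagement section of the NvDCF config file. A fragment along these lines (the parameter names come from the NvDCF tracker config; the values here are only illustrative, check the shipped config_tracker_NvDCF_accuracy.yml for the actual defaults) controls how long a lost target survives:

```yaml
TargetManagement:
  probationAge: 3           # frames a new target must survive before it is confirmed
  maxShadowTrackingAge: 51  # frames a lost target is kept alive in shadow mode
  earlyTerminationAge: 1    # unconfirmed targets are dropped after this many misses
```

Increasing maxShadowTrackingAge lets the tracker ride out longer detector dropouts over the textured background, at the cost of more potential ID switches.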

Thank you for the reply, so to conclude,

  • There is currently no mechanism to easily integrate optical flow data into nvtracker
  • The performance of the tracker depends greatly on the performance of the detector
  • Shadow tracking data is not reported downstream by default, but it can be retrieved by the user

So I guess in my case it will mostly be a matter of training a model that is more robust to textured backgrounds.

Yes, you are right. You can implement a custom low-level tracker library based on your own algorithm by following this guidance: Gst-nvtracker — DeepStream documentation
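To make the custom-tracker route concrete, here is a small, self-contained Python sketch of the kind of association logic such a library would implement, including a shadow-tracking age so lost targets survive short detector gaps. This is only an algorithm-level illustration with made-up names (GreedyIouTracker, max_shadow_age), not the NvMOT C/C++ interface that an actual low-level tracker library must expose:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

class GreedyIouTracker:
    def __init__(self, iou_thresh=0.3, max_shadow_age=5):
        self.iou_thresh = iou_thresh
        self.max_shadow_age = max_shadow_age  # frames a lost track survives
        self.next_id = 0
        self.tracks = {}  # id -> {"box": box, "shadow_age": int}

    def update(self, detections):
        """Associate this frame's detections with existing tracks."""
        unmatched = list(detections)
        for tid, trk in list(self.tracks.items()):
            # Greedily match each track to its best-overlapping detection.
            best = max(unmatched, key=lambda d: iou(trk["box"], d), default=None)
            if best is not None and iou(trk["box"], best) >= self.iou_thresh:
                trk["box"], trk["shadow_age"] = best, 0
                unmatched.remove(best)
            else:
                trk["shadow_age"] += 1  # keep the track alive in "shadow" mode
                if trk["shadow_age"] > self.max_shadow_age:
                    del self.tracks[tid]
        for det in unmatched:  # unmatched detections start new tracks
            self.tracks[self.next_id] = {"box": det, "shadow_age": 0}
            self.next_id += 1
        # Report only actively matched tracks, mirroring the default
        # behavior of not emitting shadow-tracked targets downstream.
        return {t: d["box"] for t, d in self.tracks.items() if d["shadow_age"] == 0}
```

A real implementation would plug this kind of logic behind the low-level tracker API described in the Gst-nvtracker documentation, and could fold in the optical-flow prediction discussed above for the shadow-mode frames.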

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.