How to use the NvDCF tracker to track objects that move out of frame and back

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only): GeForce RTX 2080 Ti
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing): deepstream-app
• Requirement details (This is for new requirements. Include the module name — for which plugin or for which sample application — and the function description):

Hi Team,
I am trying to track objects throughout an entire video. Say car-1 moves left to right across the FOV until it leaves the frame, and later comes back from the opposite direction. I would like this car to be tracked as car-1 throughout the video, i.e. still identified as car-1 after it leaves the FOV and returns.

  1. Can I use the NvDCF tracker to handle this case? Or can I add something in a probe, before or after the tracker, to make it happen?

  2. Also, how can I control which patch of the frame is tracked for each object? I may need to supply a specific frame patch for the tracker to track on.

Your timely help will be really appreciated!

Thanks.

Firstly, you need to choose a proper tracker for your case. Please refer to Frequently Asked Questions — DeepStream 6.3 Release documentation

Probably, yes. Taking the NvDCF tracker as an example, you need to know how long the object will be gone before it comes back. If that duration is not too long, you can configure the NvDCF parameters to adapt the tracking to your case: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_NvDCF_parameter_tuning_guide.html. You may try setting a smaller “minTrackerConfidence” and a larger “maxShadowTrackingAge”.
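For example, in a DeepStream 5.0 NvDCF `tracker_config.yml` these two parameters look roughly like this (the values below are illustrative, not a recommendation; please check the key names against the sample config shipped with your DS version):

```yaml
NvDCF:
  # Keep reporting targets even at low correlation confidence
  # (0 effectively disables the confidence threshold)
  minTrackerConfidence: 0.1
  # Number of consecutive frames a lost target is kept alive in
  # "shadow tracking" mode before termination; larger values let
  # a target survive longer occlusions or absences
  maxShadowTrackingAge: 60
```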

What do you mean by “patch”? A picture drawn on the video?

Thanks for your timely reply.

For #1: yes, I chose the NvDCF tracker since it has the best accuracy and the fewest ID switches in complex scenes. I set “minTrackerConfidence” to 0. How large can the “maxShadowTrackingAge” parameter be? Say a video has around 2300 frames in total, and the target car-1 may move out and back several times during the video, with durations I cannot predict. Can I set “maxShadowTrackingAge” to 2300 to achieve this? Furthermore, what if the source is an IP camera stream instead of an offline video, where the duration is unknown? How should “maxShadowTrackingAge” be set in that case?

My test result:
I used the NvDCF tracker and set “maxShadowTrackingAge” to 2300. Please refer to the attached captures from the DS5 output video. The target object is “car 14”. After the car comes back, moving from left to right, it is only re-associated as “car 14” at the very right side of the frame; it is not tracked as “car 14” across the whole video. Can you tell me whether any other parameters need to be tuned, or whether there is a better approach?

For #2: yes, your understanding is correct. How can I use the NvDCF tracker to achieve this?

Eagerly awaiting your help, thanks!

What do you mean by “append the specific frame patch for the tracker to track on”? Do you mean you will change some content in the video and want the tracker to track an object you paste onto the video?

Hi Fiona,
May I know whether there is any update on my test results, per your suggestion? Kindly help with the above issues, and let me know if any other information is needed.
Eagerly awaiting your timely reply.
Thanks!

Hello, if the car disappears on the right side of the image and re-appears on the left side, that means the object made a huge jump from the image-plane standpoint. NvDCF defines a search region based on searchRegionPaddingScale in the config file, within which the same object is to be localized in subsequent frames. That is why you observe that car 14 is re-associated only when the car approaches where it disappeared before.

You can set searchRegionPaddingScale to its maximum, but it still would not cover the entire image. (Please check the DS doc at Gst-nvtracker — DeepStream 6.1.1 Release documentation for more details)
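For intuition, the search-region size grows with the target size rather than with the image size. A small illustrative calculation, assuming the formula given in the Gst-nvtracker docs (please verify the exact formula against your DS version's documentation):

```python
import math

def search_region(w, h, search_region_padding_scale):
    """Approximate NvDCF search-region size for a target of size w x h,
    assuming (per the Gst-nvtracker docs):
        searchRegionWidth  = w + searchRegionPaddingScale * sqrt(w * h)
        searchRegionHeight = h + searchRegionPaddingScale * sqrt(w * h)
    """
    pad = search_region_padding_scale * math.sqrt(w * h)
    return w + pad, h + pad

# A 100x50 car bbox with a padding scale of 1.0: the search region is
# only ~171x121 pixels, far smaller than a 1920x1080 frame.
rw, rh = search_region(100, 50, 1.0)
print(rw, rh)
```

This is why a target re-appearing on the far side of the frame falls outside the search region and cannot be re-associated by NvDCF alone.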

What you described is actually more like a Re-ID feature, which is something NvDCF tracker doesn’t currently support. You can implement your own custom plugin or module and expand what DS currently provides to enable such functionality.

Regarding the patch: DeepStream’s trackers are not designed for the use case where a user defines an ROI and keeps tracking it for as long as possible. Rather, the trackers are meant to keep track of objects that are periodically detected by the PGIE detector. So, if you really want to track some artificial object, you would need to modify the nvinfer metadata so that a fake object is added to nvinfer’s output metadata at every frame, or at least periodically. The tracker would then treat it as a detected object.
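To illustrate the idea outside of GStreamer: in a real pipeline you would do this in a pad probe on the nvinfer source pad, adding an NvDsObjectMeta to each frame's metadata. Below is a simplified, self-contained Python sketch of the same logic; the class and field names are illustrative, not the actual DeepStream API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Detection:
    """Stand-in for a detector output (NvDsObjectMeta in DeepStream)."""
    left: float
    top: float
    width: float
    height: float
    class_id: int
    confidence: float

@dataclass
class FrameMeta:
    """Stand-in for per-frame metadata (NvDsFrameMeta in DeepStream)."""
    frame_num: int
    objects: List[Detection] = field(default_factory=list)

def inject_fake_object(frame: FrameMeta, roi: Detection) -> FrameMeta:
    """Append a synthetic detection so a downstream tracker treats
    the ROI as a periodically detected object."""
    frame.objects.append(roi)
    return frame

# Inject a fixed ROI into every frame before the tracker sees it
roi = Detection(left=100, top=200, width=80, height=60,
                class_id=0, confidence=1.0)
frames = [FrameMeta(frame_num=i) for i in range(3)]
for f in frames:
    inject_fake_object(f, roi)
```

In an actual probe you would allocate the object meta from the batch meta pool and attach it to the frame; the point here is only that the tracker downstream cannot tell an injected object from a detected one.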

Hi pshin,

Thanks for your timely reply and help!

Do you mean I should use the DS object tracker API in nvdstracker.h to build my own tracker for the Re-ID feature? What I am not sure about is: what is the lowest-level module behind this API that my own tracker would interact with? Would my own tracker inherit the features of the NvDCF tracker (such as shadow tracking age, visual features, association, etc.) if I use this API?

I have dug through all the available documents related to the NvDCF tracker, but there is little information about this API. Your timely help will be really appreciated!

Thanks in advance.

No, I didn’t mean that you should develop your own tracker by inheriting from NvDCF. I suggested that you can do a sort of Re-ID using the output from NvDCF. For your use case, there could be multiple tracklets for the same ground-truth object. Given the tracker output from the DS pipeline in the metadata, you could implement your own algorithm for stitching such multiple tracklets together. You can add it to the DS pipeline itself as a plugin, or run it as your own module alongside the DS pipeline, for example an IoT-connected module.
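A minimal sketch of such tracklet stitching, working purely on tracker output (tracklet IDs with start/end frames and boxes), could look like the following. The matching rule here (time gap plus box-size similarity) is deliberately simple and all thresholds are made up for illustration; a real Re-ID module would typically also compare appearance features:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Tracklet:
    track_id: int
    start_frame: int
    end_frame: int
    first_box: Tuple[float, float, float, float]  # (cx, cy, w, h) at start
    last_box: Tuple[float, float, float, float]   # (cx, cy, w, h) at end

def stitch(tracklets: List[Tracklet],
           max_gap: int = 300,
           max_size_ratio: float = 1.5) -> Dict[int, int]:
    """Map each tracklet ID to a canonical 'global' ID by linking a
    tracklet that ends to a later one that starts within max_gap frames
    and has a similar box size (a crude appearance proxy)."""
    tracklets = sorted(tracklets, key=lambda t: t.start_frame)
    global_id = {t.track_id: t.track_id for t in tracklets}
    used_next = set()  # each tracklet may be linked as a successor once
    for a in tracklets:
        for b in tracklets:
            if b.track_id == a.track_id or b.track_id in used_next:
                continue
            gap = b.start_frame - a.end_frame
            if gap <= 0 or gap > max_gap:
                continue
            wa, ha = a.last_box[2], a.last_box[3]
            wb, hb = b.first_box[2], b.first_box[3]
            if max(wa / wb, wb / wa) > max_size_ratio:
                continue
            if max(ha / hb, hb / ha) > max_size_ratio:
                continue
            global_id[b.track_id] = global_id[a.track_id]
            used_next.add(b.track_id)
            break
    return global_id

# Car 14 leaves the frame at frame 700 and re-appears at frame 950
# as a new tracklet (ID 27); the stitcher re-labels 27 as 14.
tracklets = [
    Tracklet(14, 0, 700, (50, 300, 100, 60), (1850, 300, 100, 60)),
    Tracklet(27, 950, 1600, (1850, 310, 105, 62), (40, 310, 105, 62)),
]
print(stitch(tracklets))  # {14: 14, 27: 14}
```

Because the stitcher only consumes metadata, it can run in a pad probe after nvtracker or in a separate process fed by the message broker.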