Small bbox size limit in KLT Feature Tracker

Hello,
I wanted to use the VPI library for tracking, specifically the KLT Feature Tracker. Using the algorithm, I realized that the maximum size of a bounding box is 64x64. I work with 2560x1440 images, so every object I select has a big bbox and is lost immediately in the next frame. Is there a way to use bigger bounding boxes or track bigger objects?

https://docs.nvidia.com/vpi/algo_klt_tracker.html

Thanks.
Salvatore

Hi,

Could you first set the parameters to allow a larger scale change and translation change, to see if that helps?
https://docs.nvidia.com/vpi/group__VPI__KLTFeatureTracker.html#structVPIKLTFeatureTrackerParams


struct VPIKLTFeatureTrackerParams

Structure that defines the parameters for vpiCreateKLTFeatureTracker.

Parameters
[in] thresholdUpdate threshold to update template
[in] thresholdKill threshold to kill tracking
[in] thresholdStop threshold to stop iteration
[in] maxScaleChange maximum scale change for valid tracking
[in] maxTranslationChange maximum translation change for valid tracking
[in] imageType input image type

Thanks.

Thanks for the answer, but these parameters control the allowed variation of a bbox, not its size. My problem is that the bboxes obtained from detection are large, e.g. 250x540, 54x110, 145x180, and after inserting them in the first frame of the algorithm, the trackers are immediately lost in the next frame. Reading the documentation, there is a constraint that the bboxes must be at least 4x4 and at most 64x64.

But the bbox of a person, a car, or other objects at medium distance is not that small. I was wondering if there is a trick or a way to use larger bboxes.
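One possible workaround (my own sketch, not something the VPI documentation describes): downscale the frame and the detection bboxes by a common factor so that every box fits the 64x64 template limit, run the tracker at the reduced resolution, then scale the results back up. The function names below are invented for illustration; only the 64x64 limit comes from the docs.

```python
# Hypothetical workaround sketch (not an official VPI feature): shrink the
# frame so every detection bbox fits the tracker's 64x64 limit, track at the
# reduced resolution, then scale tracked boxes back to the original frame.

MAX_TEMPLATE = 64  # documented upper bbox limit of the VPI KLT tracker

def downscale_factor(sizes, max_side=MAX_TEMPLATE):
    """Return a scale s <= 1 such that every (w, h) in sizes, multiplied
    by s, fits inside a max_side x max_side template."""
    largest = max(max(w, h) for (w, h) in sizes)
    return min(1.0, max_side / largest)

def scale_bbox(bbox, s):
    """Scale an (x, y, w, h) bbox by factor s."""
    x, y, w, h = bbox
    return (x * s, y * s, w * s, h * s)

# Example with the bbox sizes mentioned above, on a 2560x1440 frame:
boxes = [(100, 200, 250, 540), (900, 300, 54, 110), (1500, 700, 145, 180)]
s = downscale_factor([(w, h) for (_, _, w, h) in boxes])
small = [scale_bbox(b, s) for b in boxes]
# Every scaled box now fits within 64x64; the frame itself would be resized
# to (round(2560 * s), round(1440 * s)) before being handed to the tracker.
```

The obvious trade-off is that shrinking a 2560x1440 frame by the factor needed for a 540-pixel-tall box discards a lot of texture, so tracking quality at the reduced resolution would need to be verified.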

Hi,

The parameters are related to the search range, so they may help if the tracking loss is caused by a larger translation or scale change.

However, if your issue comes from the VPI tracking constraint, you can try another tracker in DeepStream:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvtracker.html

Thanks.

I understand that the parameters are used for translation and scale, but with bboxes larger than 64x64 it doesn't work even on static objects.

I’ve already switched from VisionWorks to VPI, and I don’t want to switch to DeepStream because, in addition to tracking, I do stereo correspondence.

Hi,

Let us check this possibility with our internal team first.
Thanks and sorry for the inconvenience.

Thank you, I’m waiting.
Sa

Hi,

Thanks for your patience.

We will try to add this to a future package.
We will let you know once it is finished and released.

Thanks.

Thank you for your availability and help. I will wait for your release.