VisionWorks Object Tracking with Vision Primitives API

Hello everyone,
I need to develop an object tracking algorithm. I was following the example from the documentation, shown below:

/usr/share/visionworks/sources/demos/feature_tracker

./nvx_demo_feature_tracker [options]

but this algorithm tracks all features in the frame, not just one object. I saw another example, built with VisionWorks-CUDA, where tracking is done on objects:

/usr/share/visionworks/sources/demos/object_tracker_nvxcu/

Is there a possibility to do the same with the Vision Primitives API algorithm with graphs and nodes?

One solution would be to use a mask as input to nvxFastTrackNode, but on the next frame, where I don’t provide the mask, the algorithm finds other points and tracks them all. So I would have to provide a mask every frame, but how do I generate it?

I want to select points on an object and track only the points of that object while it is moving.
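To illustrate the idea (this is not the VisionWorks API, and the function name, margin parameter, and raw-byte mask layout are all assumptions for the sketch), one way to regenerate the mask each frame is to rebuild it from the bounding box of the points that were successfully tracked in the previous frame:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Point { float x, y; };

// Build a binary mask covering the bounding box of the currently
// tracked points, expanded by `margin` pixels and clamped to the
// image. Feed the result back as the mask for the next frame so the
// detector only proposes new points near the tracked object.
std::vector<uint8_t> maskFromTrackedPoints(const std::vector<Point>& pts,
                                           int width, int height, int margin)
{
    std::vector<uint8_t> mask(static_cast<size_t>(width) * height, 0);
    if (pts.empty()) return mask;

    float minX = pts[0].x, maxX = pts[0].x;
    float minY = pts[0].y, maxY = pts[0].y;
    for (const Point& p : pts) {
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
    }

    int x0 = std::max(0, static_cast<int>(minX) - margin);
    int y0 = std::max(0, static_cast<int>(minY) - margin);
    int x1 = std::min(width  - 1, static_cast<int>(maxX) + margin);
    int y1 = std::min(height - 1, static_cast<int>(maxY) + margin);

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            mask[static_cast<size_t>(y) * width + x] = 255;
    return mask;
}
```

In a graph-based pipeline you would copy this mask into the image object that the detector node reads, once per frame, before processing the next frame.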

Thanks
Looking forward to your answers.

Hi,

VPI only has a feature tracking algorithm:
https://docs.nvidia.com/vpi/algo_klt_tracker.html

A heuristic approach is to set the mask as a window around the object,
and limit its scaling ratio (e.g. [0.9, 1.1]) based on your use case to filter out unwanted features.
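One way to read this suggestion, as a plain C++ sketch rather than an actual VPI call (the Box struct, function name, and default ratios are assumptions): treat the object window from the previous frame as the mask, and reject any update whose scale ratio falls outside the allowed range.

```cpp
struct Box { float x, y, width, height; };

// Reject a tracking update whose window scale changed more than the
// allowed ratio range (here [0.9, 1.1]); a sudden jump in window size
// usually means the tracker latched onto background features instead
// of the object.
bool scaleWithinRange(const Box& prev, const Box& curr,
                      float minRatio = 0.9f, float maxRatio = 1.1f)
{
    float rw = curr.width  / prev.width;
    float rh = curr.height / prev.height;
    return rw >= minRatio && rw <= maxRatio &&
           rh >= minRatio && rh <= maxRatio;
}
```

When the check fails, you could keep the previous window (coasting) or re-detect, depending on the use case.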

Thanks.

So you can’t achieve this with VisionWorks?

Doesn’t the VisionWorks-CUDA example do this?

/usr/share/visionworks/sources/demos/object_tracker_nvxcu/

I ask because I’m already using the Semi-Global Matching part of VisionWorks, and I wanted to use the same library for tracking as well. Since tracking is done for all points in the image, I was wondering whether there is some modification I could make so that tracking runs only on the points of certain objects.

Instead of using the VPI (Vision Programming Interface) library, wouldn’t it be better to use the VisionWorks-CUDA object_tracker_nvxcu example?

Thanks for the answer

Hi,

Is there a possibility to do the same with the Vision Primitives API algorithm with graphs and nodes?

Sorry, I thought you were asking for the same functionality in VPI.

You can use the object_tracker_nvxcu in VisionWorks.
It filters out features based on their motion distance.
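For illustration only (the actual filtering lives in the object_tracker_nvxcu sources; the function name, Point struct, and threshold here are assumptions), motion-distance filtering can be sketched as keeping only the features whose frame-to-frame displacement stays close to the median displacement of the set:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point { float x, y; };

// Keep only the features whose frame-to-frame motion is within
// `maxDev` pixels of the median motion of the whole set; outliers
// (typically background features) are dropped. Returns the indices
// of the surviving features.
std::vector<size_t> filterByMotionDistance(const std::vector<Point>& prev,
                                           const std::vector<Point>& curr,
                                           float maxDev)
{
    std::vector<float> dist(prev.size());
    for (size_t i = 0; i < prev.size(); ++i)
        dist[i] = std::hypot(curr[i].x - prev[i].x, curr[i].y - prev[i].y);

    // Median via partial sort; robust against a minority of outliers.
    std::vector<float> sorted = dist;
    std::nth_element(sorted.begin(), sorted.begin() + sorted.size() / 2,
                     sorted.end());
    float median = sorted[sorted.size() / 2];

    std::vector<size_t> kept;
    for (size_t i = 0; i < dist.size(); ++i)
        if (std::fabs(dist[i] - median) <= maxDev)
            kept.push_back(i);
    return kept;
}
```

Using the median rather than the mean keeps the reference motion stable even when a few background features move very differently from the object.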

For more details, please see our documentation below:

Thanks.

Thank you for your answer
Should I use the VisionWorks or VPI library?
And consequently, the object_tracker_nvxcu algorithm or the VPI KLT Feature Tracker?

I’m using VisionWorks for stereo correspondence; could you give me some advice?
Thanks

Hi,

Based on your use case, it’s more recommended to use VisionWorks.

But please note that we no longer add new features to VisionWorks.
This means you won’t be able to request a new algorithm if it isn’t currently available in VisionWorks.

Thanks.

Thanks for the answer. If I had to migrate everything to VPI, would the stereo correspondence algorithm have the same performance as in VisionWorks?
Thanks

Hi,

We don’t have a benchmark report for VPI vs. VisionWorks.
But you should get similar or better performance with VPI, since we keep optimizing it.

Thanks.

OK, I will consider migrating to VPI. Thanks for your support and answers.