Hi. I see that the feature_tracker_nvxcu sample provides an implementation of optical flow calculation as well as Harris corner tracking (harris_track).
There is another sample, video_stabilizer, that shows how to use these optical flow values to find a homography and warp the perspective. However, that sample uses the vx (OpenVX vision primitive) API, whereas the previous example uses the nvxcu API. Is there a way for these two APIs to interact?
The flow I have in mind is:
cuda_buffer (frame) -> nvxcu_image -> color converting (nvxcu) -> LK features and optical flow (nvxcu) -> homography -> warping -> harris track (nvxcu).
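In pseudocode, the pipeline would look roughly like this. The hand-off between the nvxcu and vx sides (marked with `???`) is exactly the part I am unsure about; all function names below are placeholders for illustration, not real VisionWorks API calls:

```
frame       = cuda_buffer                       // raw CUDA device memory
nvxcu_img   = wrap_as_nvxcu_image(frame)        // zero-copy wrap (placeholder)
gray        = nvxcu_color_convert(nvxcu_img)    // color conversion, nvxcu side
pts, flow   = nvxcu_lk_optical_flow(gray)       // LK features + optical flow, nvxcu side
// ??? hand-off: nvxcu data -> vx objects, ideally without a device->host copy ???
H           = vx_find_homography(pts, flow)     // vx side, as in video_stabilizer
stabilized  = vx_warp_perspective(nvxcu_img, H) // vx side
tracks      = nvxcu_harris_track(stabilized)    // back on the nvxcu side
```

Is there a supported way to do the `???` step, or do the two APIs require a copy through host memory?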