Does anyone have some sample code that uses the Median Flow calls such as nvxuMedianFlow
Cheers
Youngie…
Hi,
You can find some information in our document:
VisionWorks API > NVIDIA Extension API > Vision Primitives API > Median Flow > nvxuMedianFlow
Thanks.
I have looked at the API but I can’t seem to find any information on how to use it. For instance, what is the difference between the graph call and the immediate call? Are prev_pts and next_pts simply the values in the matrix for an image? Is the output a 3D vector? A simple example would be nice.
[in] context Specifies the context.
[in] prev_pts Specifies the input previous points list. Only VX_TYPE_KEYPOINT and NVX_TYPE_POINT2F item types are supported.
[in] next_pts Specifies the input next points list. It must have the same item type and number of items as prev_pts.
[in] pts_fb [optional] Specifies the backward points list. It must have the same item type and number of items as prev_pts.
[out] out Specifies the output median flow. It is a one element NVX_TYPE_POINT3F array. x and y fields of the first element represent estimated displacement, z field represents estimated scale change. In case of estimation failure (0,0,−1) is returned.
[in] estimate_scale Specifies whether to estimate scale change.
[in] filter_flow_by_err Specifies whether to filter out points with a high error value.
[in] error_fb_thresh [optional] Specifies the threshold for forward-backward errors. Pass a nonpositive value to disable forward-backward filtering.
Hi,
The information is included in our document.
Let us summarize here:
1. Graph Mode vs. Immediate Mode
You can find some information here:
VisionWorks API > Tutorials > VisionWorks Quick Start (Immediate Mode)
VisionWorks API > Tutorials > VisionWorks Quick Start (Graph Mode)
Graph-based execution
Primitives are instantiated as graph nodes. The graph is built, verified, and optimized ahead of time and can be executed multiple times without re-verification at run time. Graph-based execution should be preferred for vision pipelines that are executed multiple times (when processing a video stream, for instance), as it gives the best performance.
Immediate execution
Primitives are executed directly by calling a function (prefixed with vxu or nvxu), similar to the OpenCV or NPP execution mode. The immediate execution model is useful for one-time processing where the primitive setup overhead is not a big concern. It can also be useful as an intermediate step in application development, such as when porting an application that uses OpenCV.
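As a minimal sketch of the difference (this is not taken from the VisionWorks samples; it uses the standard Gaussian 3x3 primitive as a stand-in and omits error checking), the same pattern applies to Median Flow via nvxuMedianFlow in immediate mode or its graph-node counterpart:

#include <VX/vx.h>
#include <VX/vxu.h>

void blur_both_ways(void)
{
    vx_context context = vxCreateContext();
    vx_image src = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image dst = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    /* Immediate mode: the primitive executes as soon as the vxu call returns. */
    vxuGaussian3x3(context, src, dst);

    /* Graph mode: build the pipeline once, verify it, then run it per frame. */
    vx_graph graph = vxCreateGraph(context);
    vxGaussian3x3Node(graph, src, dst);
    vxVerifyGraph(graph);
    vxProcessGraph(graph);   /* repeat this call for every new frame */

    vxReleaseGraph(&graph);
    vxReleaseImage(&dst);
    vxReleaseImage(&src);
    vxReleaseContext(&context);
}

The graph is verified once up front, so the per-frame cost is just vxProcessGraph; in immediate mode the setup cost is paid on every call.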
2. prev_pts and next_pts
They are key points: the feature points used to estimate the motion between the previous frame and the next frame.
The structure of a key point can be found here:
https://www.khronos.org/registry/OpenVX/specs/1.1/html/d4/dae/group__group__basic__features.html#d6/db0/structvx__keypoint__t
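A rough sketch of building those lists (an assumption for illustration, not code from the documentation): the points can be stored in plain vx_array objects of NVX_TYPE_POINT2F, one item per tracked feature, using the nvx_point2f_t type from the NVIDIA extension header:

#include <NVX/nvx.h>

/* Build a point list from host data. prev_pts[i] and next_pts[i] must
 * describe the same feature in the previous and the next frame. */
vx_array make_point_array(vx_context context, const nvx_point2f_t *pts, vx_size count)
{
    vx_array arr = vxCreateArray(context, NVX_TYPE_POINT2F, count);
    vxAddArrayItems(arr, count, pts, sizeof(nvx_point2f_t));
    return arr;
}

Typically prev_pts comes from a feature detector on the previous frame and next_pts from tracking those same features into the next frame (for example with sparse optical flow), so both arrays have the same length and ordering.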
3. output
The output variable type is vx_array; a vx_array item can have one of the following NVX types. For median flow, the output is a one-element NVX_TYPE_POINT3F array (see the sketch after the list below).
NVX_TYPE_POINT2F: an nvx_point2f_t.
NVX_TYPE_POINT3F: an nvx_point3f_t.
NVX_TYPE_POINT4F: an nvx_point4f_t.
NVX_TYPE_KEYPOINTF: an nvx_keypointf_t.
NVX_TYPE_STRUCT_MAX: a floating value for comparison between structs and objects.
NVX_TYPE_OBJECT_MAX: a floating value used for bound checking the VisionWorks object types.
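Putting the pieces together, here is a hedged sketch of the immediate-mode call, based on the parameter list quoted earlier in this thread. The exact argument types of nvxuMedianFlow, and whether pts_fb may be passed as NULL, are assumptions; please check the NVX headers for the real prototype:

#include <stdio.h>
#include <NVX/nvx.h>

/* Sketch: estimate median displacement and scale change between two point lists. */
void run_median_flow(vx_context context, vx_array prev_pts, vx_array next_pts)
{
    /* The output is a one-element NVX_TYPE_POINT3F array. */
    vx_array out = vxCreateArray(context, NVX_TYPE_POINT3F, 1);

    nvxuMedianFlow(context,
                   prev_pts,    /* input previous points                               */
                   next_pts,    /* input next points                                   */
                   NULL,        /* pts_fb: optional backward points                    */
                   out,         /* output median flow                                  */
                   vx_true_e,   /* estimate_scale                                      */
                   vx_false_e,  /* filter_flow_by_err                                  */
                   -1.0f);      /* error_fb_thresh: non-positive disables FB filtering */

    /* Copy the single result item back to the host. */
    nvx_point3f_t flow;
    vxCopyArrayRange(out, 0, 1, sizeof(nvx_point3f_t), &flow,
                     VX_READ_ONLY, VX_MEMORY_TYPE_HOST);

    /* flow.x, flow.y hold the estimated displacement, flow.z the estimated
     * scale change; (0, 0, -1) indicates estimation failure. */
    printf("displacement: (%f, %f), scale change: %f\n", flow.x, flow.y, flow.z);

    vxReleaseArray(&out);
}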
You can refer to the sample located at /usr/share/visionworks/sources/samples/object_tracker_nvxcu/.
Although it is for optical flow, the usage should be similar.
Thanks.