Visionworks Motion Fields Extract

Hi,

I have been evaluating the VisionWorks motion estimator. I was able to run the VisionWorks example and to use the motion estimator on my own, but I have a question about analyzing the motion-field output vx_image.

When I ran the motion estimator example, it showed the motion areas and arrows pointing in the direction each object is moving. The rendering part is in Render.

How can I extract the motion points and their direction of movement in my own application? Could anyone help me here?

Thanks
Anil

Hi,

In /usr/share/visionworks/sources/demos/motion_estimation/main_motion_estimation.cpp:

IterativeMotionEstimator ime(context);
..
ime.process();
..
render->putMotionField(ime.getMotionField(), mfStyle);

ime evaluates the motion vectors between two adjacent frames and stores them in mfOutROI_.
You can access the motion field by calling ime.getMotionField().

The detailed structure of IterativeMotionEstimator is defined in the iterative_motion_estimator.hpp.

class IterativeMotionEstimator
{
public:
    ...
    vx_image getMotionField() const;
    ...

private:
    ...
    vx_image mfOutROI_;
    ...
};

Thanks.

Yes, I have implemented that and got mfOutROI_ out, but is there any way to interpret the result?

I saw that the renderer draws arrows on each object showing its direction. I would like to interpret mfOutROI_ to get the following

  1. The points/positions of the objects
  2. Their angle of orientation, or direction of movement

as the renderer does, so that I can use them for other purposes.

Could you help me understand this output? I did not find any references for it.

Hi,

The mfOutROI_ image contains a pixel-level motion field.
That is, value(x, y) is the motion of the point (x, y) from the current frame to the previous frame (backward motion).

More precisely, for a point (x, y) with field value (mx, my) in frame N, the matched point in frame N-1 is (x + mx, y + my).

Thanks.