Optical flow acceleration

I no longer have the exact script I used to test this last month, but it was basically the example from the docs (VPI - Vision Programming Interface: Pyramidal LK Optical Flow). The only addition is the reset of the tracker at the end of the loop.

            ### tried to update the tracker with fresh features, but they are quickly lost
            curFeatures, scores = frame.harriscorners(strength=0.1, sensitivity=0.01)
            optflow = vpi.OpticalFlowPyrLK(frame, curFeatures, 4)

The script shown in my previous post illustrates exactly that and should work (just add the imports and camera initialization and it will run).

However, I'm facing an issue: since my camera is moving, I need to constantly provide new features to track, but I wasn't able to do that:

  • trying to pass new features via optflow(frame, newFeatures) only overwrites newFeatures with the results
  • trying to reset the tracking at each frame by creating a new vpi.OpticalFlowPyrLK each time makes the tracker lose almost all features within a few seconds for no apparent reason (even with a static camera), and it also divides performance by 10…

I can't figure out why. Is there a better way to do this? Or do you understand why the tracking is lost for no reason?

with vpi.Backend.CUDA:
    optflow = None
    while True:
        image = camera.getImage()

        frame = vpi.asimage(image, vpi.Format.BGR8).convert(vpi.Format.U8)

        if optflow is None:
            # first frame: detect features and create the tracker
            curFeatures, scores = frame.harriscorners(strength=0.1, sensitivity=0.01)
            optflow = vpi.OpticalFlowPyrLK(frame, curFeatures, 4)
        else:
            prevFeatures = curFeatures
            curFeatures, status = optflow(frame)  # optflow(frame, newFeatures) seems to use newFeatures as a result container

            print("fraction of features lost:", status.cpu().nonzero()[0].shape[0] / (curFeatures.size + 1))

            ### tried to update the tracker with fresh features, but they are quickly lost
            curFeatures, scores = frame.harriscorners(strength=0.1, sensitivity=0.01)
            optflow = vpi.OpticalFlowPyrLK(frame, curFeatures, 4)

Derived from the VPI example: VPI - Vision Programming Interface: Pyramidal LK Optical Flow

Notes:

  • I removed the feature filtering present in the example, as it was hurting performance without any benefit
  • moving the context switch (e.g. the with vpi.Backend.CUDA: block) out of the loop greatly improves performance
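For clarity, here is the re-seeding policy I would ideally like, sketched with stand-in detect()/track() helpers (hypothetical placeholders, not the real VPI calls): re-detect only when the lost fraction or the feature count crosses a threshold, instead of on every frame:

```python
RESEED_THRESHOLD = 0.5  # re-detect when more than half the features are lost
MIN_FEATURES = 4        # or when too few features remain

def detect(frame):
    """Placeholder for frame.harriscorners(...): returns 10 fake features."""
    return [(float(i), float(i)) for i in range(10)]

def track(features, frame):
    """Placeholder for optflow(frame): drops every third feature."""
    lost = [i % 3 == 0 for i in range(len(features))]
    kept = [f for f, is_lost in zip(features, lost) if not is_lost]
    return kept, lost

features = detect(0)
history = []
for frame in range(1, 6):
    features, lost = track(features, frame)
    lost_fraction = sum(lost) / max(len(lost), 1)
    reseeded = lost_fraction > RESEED_THRESHOLD or len(features) < MIN_FEATURES
    if reseeded:
        # pay the detection / tracker re-creation cost only when needed
        features = detect(frame)
    history.append((len(features), reseeded))
```

With a policy like this, the expensive detection and tracker re-creation would run only occasionally instead of on every iteration.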

Do you have any idea what I am doing wrong? Or how to get continuous tracking with a moving camera using this implementation?

Thank you for your help

edit: I just ran a test with OpenCV's calcOpticalFlowPyrLK with goodFeaturesToTrack using the same strategy (resetting the markers each frame), and it works great. Also, to my surprise, I reached performance similar to the CUDA VPI implementation, while I'm pretty sure it runs on the CPU. I may stick with the raw OpenCV approach if we can't find the issue with my VPI test.

Hi,

How fast does the camera move?
Do you know the expected x, y translation (in pixels) between two adjacent frames?

Thanks.

As mentioned several times, this was with a static camera, no movement, so there is no reason for any tracker loss…

Hi,

Could you try to use the CPU backend for tracking to see if it helps?
We found some issues in the VPI buffer which might be related.

Thanks.

Where can I find the announcement when this update is released?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.