I would like to efficiently compute dense optical flow (ideally in Python) on the Orin. NVENC seems a good way to do so, but VPI doesn't implement it for the Orin. The Orin documentation also mentions the OFA, which seems great, but I can't find any information on how to use it.
Is there a way to benefit from the hardware acceleration offered by the Orin to compute optical flow?
Is there a roadmap for when hardware-accelerated optical flow via the OFA (or even NVENC) will be available?
I can't make progress on my software because I need real-time dense optical flow (ideally not on a GPU that is already used by other algorithms).
I need to know when it will be available to evaluate the impact on my schedule. It is a bit frustrating to have the hardware capable of doing it but not be able to use it for lack of software/documentation.
I hope it will be available soon, so I can use the full potential of this great module (and also stay on schedule)!
Can you at least tell me whether this is a matter of several weeks? A few months? Or even longer?
I would like to know whether I can hope to have it for my project, or whether I urgently need to find an alternative solution…
Thank you for your answer.
It’s not ideal as I’m already using the GPU for other algorithms in parallel, but at least now I know that I have to find a way to do so.
Do you know if the “NvidiaHWOpticalFlow” of OpenCV is compatible with the AGX ORIN?
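For context, this is roughly how that OpenCV interface is driven in the official tutorial; I haven't been able to verify it on the AGX ORIN, and the create() arguments changed between OpenCV releases (separate width/height in 4.2, an image-size tuple later), so treat it as a sketch:

```python
import cv2 as cv
import numpy as np

# Stand-ins for two consecutive grayscale camera frames
h, w = 480, 640
prev_gray = np.zeros((h, w), np.uint8)
cur_gray = np.zeros((h, w), np.uint8)

# Requires an opencv_contrib build with CUDA and the NVIDIA Optical Flow SDK
nvof = cv.cuda_NvidiaOpticalFlow_1_0.create(
    w, h,   # frame size (a single tuple in newer OpenCV releases)
    5,      # perfPreset: 5 = slowest preset, highest quality
    False,  # enableTemporalHints
    False,  # enableExternalHints
    False,  # enableCostBuffer
    0)      # gpuId

result = nvof.calc(prev_gray, cur_gray, None)
# The flow is computed on a coarse grid; upSampler() brings it back
# to full resolution
flow = nvof.upSampler(result[0], w, h, nvof.getGridSize(), None)
nvof.collectGarbage()
```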
To work around this issue, I tried sparse optical flow using the VPI implementation on the CUDA backend, which is available for the AGX ORIN.
However, I'm facing a problem. As my camera is moving, I need to constantly provide new features to track, but I wasn't able to do that:

- passing new features with optflow(frame, newFeatures) only overwrites newFeatures with the results;
- resetting the tracking at each frame by creating a new vpi.OpticalFlowPyrLK makes the tracker lose almost all features within a few seconds for no apparent reason (even with a static camera), in addition to dividing performance by 10…

I can't find out why. Is there a better way to do this? Or do you see why the tracking is lost for no reason?
```python
import vpi

with vpi.Backend.CUDA:
    optflow = None
    while True:
        image = camera.getImage()  # my own frame grabber
        frame = vpi.asimage(image, vpi.Format.BGR8).convert(vpi.Format.U8)
        if optflow is None:
            # First frame: detect features and create the tracker
            curFeatures, scores = frame.harriscorners(strength=0.1, sensitivity=0.01)
            optflow = vpi.OpticalFlowPyrLK(frame, curFeatures, 4)
        else:
            prevFeatures = curFeatures
            # Passing a second argument, optflow(frame, newFeatures), seems to
            # use newFeatures as a result container and overwrites it
            curFeatures, status = optflow(frame)
            print("fraction of trackers lost:",
                  status.cpu().nonzero()[0].shape[0] / (curFeatures.size + 1))
            ### Tried to update the tracker with fresh features, but they are quickly lost
            curFeatures, scores = frame.harriscorners(strength=0.1, sensitivity=0.01)
            optflow = vpi.OpticalFlowPyrLK(frame, curFeatures, 4)
```
I removed the feature filtering present in the example, as it was hurting performance with no benefit. Also, moving context switches out of the loop (e.g. with vpi.Backend.CUDA:) greatly improves performance.
Do you have any idea of what I am doing wrong? Or how to get continuous tracking with a moving camera using this implementation?
Thank you for your help
edit: I just did a test with OpenCV's calcOpticalFlowPyrLK with goodFeaturesToTrack using the same strategy (resetting markers each frame), and it works great. Also, to my surprise, I reached performance similar to the CUDA VPI implementation, while I'm pretty sure it runs on the CPU. I may stick with the plain OpenCV approach if we can't find the issue with my VPI test.
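For reference, here is roughly the loop I tested; camera.getImage() is a placeholder for my actual frame source, and the detector/LK parameters are just the ones I happened to try:

```python
import cv2
import numpy as np

prev_gray = None
prev_pts = None
while True:
    image = camera.getImage()  # placeholder for the real frame grabber
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    if prev_pts is not None and len(prev_pts) > 0:
        # Track last frame's points into the current frame
        cur_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, prev_pts, None,
            winSize=(21, 21), maxLevel=4)
        good_new = cur_pts[status.ravel() == 1]
        good_old = prev_pts[status.ravel() == 1]
        motion = good_new - good_old  # sparse motion vectors
    # Same strategy as with VPI: reset the markers every frame
    prev_pts = cv2.goodFeaturesToTrack(
        gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    prev_gray = gray
```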
What kind of flow do you want to get?
In general, sparse optical flow will give you the foreground (object) motion, while a dense estimate is better for the background flow.
If your camera moves fast, it's also possible that tracking is lost due to motion blur.
Ideally I would like dense optical flow; however, this is currently not available with VPI, and I don't want to spend my GPU resources on it (I have other algorithms running in parallel).
So I'm experimenting with a lighter approach based on sparse optical flow tracking, with trackers regularly refreshed to keep following the motion. This should do the trick after some adaptation of my algorithm (and some concessions on precision).
This strategy seems to work with the regular OpenCV implementation, but I wasn't able to make the VPI implementation work properly.
Note that the tracking losses I described previously occurred in a static situation (no camera movement): fresh trackers are instantly lost for no apparent reason.
I learned the hard way that the memory shared between different VPI objects can sometimes be misleading (and VPI doesn't offer an easy way to copy objects without converting them to numpy on the CPU). Maybe some such mechanism internally prevents the reinitialization with new trackers from happening as it should.
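To illustrate the kind of pitfall I mean (this is a guess, not a confirmed explanation of the tracker issue): vpi.asimage() wraps the numpy buffer instead of copying it, so if the camera driver reuses its buffer, every previously wrapped VPI frame silently changes with it. A minimal sketch, assuming Image.cpu() returns a view as in the VPI samples:

```python
import numpy as np
import vpi

buf = np.zeros((480, 640), np.uint8)  # imagine this is the camera's reused buffer
frame = vpi.asimage(buf)              # wraps buf; no copy is made

buf[:] = 255                          # "next image" written into the same buffer
print(frame.cpu().max())              # 255: the previous frame changed under our feet

safe_frame = vpi.asimage(buf.copy())  # defensive fix: copy before wrapping
```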
Note also that the OpenCV CPU implementation seems fast enough for my needs. It would be interesting to understand why the VPI implementation doesn't behave as expected, but since it doesn't block my work, this is not a priority.
The code is an extract from the VPI example (VPI - Vision Programming Interface: Pyramidal LK Optical Flow), and the OpenCV implementation I currently use (extracted from an OpenCV example) is similar; I just use goodFeaturesToTrack instead of Harris, which gives more robust features to track.
In my understanding, OpticalFlowPyrLK works by computing the optical flow of the local area around a feature in order to follow it. Thus, to my knowledge, we don't need a feature descriptor: Harris (or goodFeaturesToTrack) only indicates areas with texture characteristics good enough for the optical-flow tracking to be robust.
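To make that concrete, here is a tiny numpy illustration of the connection: LK solves a 2x2 least-squares system built from the local gradients, and Harris scores how well-conditioned that system is. This is only a conceptual sketch, not how VPI or OpenCV implement it:

```python
import numpy as np

def lk_step(prev_patch, cur_patch):
    """One Lucas-Kanade step on a single patch: solve (A^T A) d = -A^T b,
    where A^T A is the structure tensor that Harris also scores."""
    prev_patch = prev_patch.astype(np.float64)
    Iy, Ix = np.gradient(prev_patch)                  # spatial gradients
    It = cur_patch.astype(np.float64) - prev_patch    # temporal gradient
    AtA = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                    [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    # Two large eigenvalues = corner (trackable); one or zero = edge/flat
    if np.linalg.eigvalsh(AtA).min() < 1e-6:
        return None  # untrackable patch, i.e. a low Harris score
    Atb = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(AtA, Atb)                  # displacement (dx, dy)

rng = np.random.default_rng(0)
textured = rng.integers(0, 255, (21, 21)).astype(np.float64)
print(lk_step(textured, np.roll(textured, 1, axis=1)))  # ~ (1, 0): moved right
print(lk_step(np.full((21, 21), 128.0), np.full((21, 21), 128.0)))  # None: flat
```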
Yes, sorry. The LK tracker uses position as input.
Updated my previous comment as well.
Have you checked the output of Harris?
If the scene is stationary, does the detector find the corners at nearly the same positions?
This helps figure out whether the issue comes from the tracker or from the detector.
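For example, something along these lines, reusing the same harriscorners() call as in your loop; note that I'm assuming the keypoint array exposes x/y fields once copied to the CPU, which is worth double-checking against your VPI version:

```python
import numpy as np
import vpi

def corners_xy(image):
    with vpi.Backend.CUDA:
        frame = vpi.asimage(image, vpi.Format.BGR8).convert(vpi.Format.U8)
        keypoints, _scores = frame.harriscorners(strength=0.1, sensitivity=0.01)
    kp = keypoints.cpu()
    # Assumption: the CPU view is a structured array with 'x' and 'y' fields
    return np.stack([kp['x'], kp['y']], axis=1).astype(np.float64)

a = corners_xy(camera.getImage())  # two frames of a static scene
b = corners_xy(camera.getImage())

# Distance from each corner in the first frame to its nearest neighbour
# in the second; with a stable detector this should be close to zero
d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min(axis=1)
print("corners:", len(a), len(b), "median drift:", np.median(d))
```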
From what I remember (I'm now working with the OpenCV implementation), I don't think Harris had an issue. And even if Harris were wrong, that wouldn't explain the observed behavior; the issues described previously point toward a problem with the tracker.