I am using NvOFCuda in NV_OF_MODE_STEREODISPARITY mode.
I am able to compute the left-to-right disparity by setting the left image to image1 and the right image to image2 in NvOF::Execute. I was hoping that getting the right-to-left disparity was as easy as swapping image1 and image2, but that produces garbage output.
The algorithm seems to look for “left-to-right” disparities only. Is there a way to control the disparity search range so that “right-to-left” disparities can be found as well?
My current workaround is to horizontally flip the inputs, set left image to image2, set right image to image1, run NvOFCuda, then horizontally flip the output. I’m hoping for a more direct and efficient solution.
The behavior you are seeing is expected. In NV_OF_MODE_STEREODISPARITY mode, the NVOF API assumes the image pair is a left-right (LR) pair and performs the search accordingly; feeding it an RL pair may therefore produce incorrect output. This is done so that the engine can spend its full search budget in one direction, which allows it to capture larger disparities. In typical stereo use-cases the images are LR pairs.
If you want to avoid the workaround entirely, you can use NV_OF_MODE_OPTICALFLOW instead and discard the flow in the Y direction. However, you may lose the advantage of capturing larger disparities, since in that mode the hardware engine searches in all directions.
Thanks for your response. Yes, I’ve considered using NV_OF_MODE_OPTICALFLOW mode, but I was hoping to keep the search in 1D.
Would it be possible for NVIDIA to expose the disparity search range in the NVOF API? For example, (min=0, max=255) would be a typical LR search, (min=-255, max=0) would be the equivalent RL search, and (min=-255, max=255) would be a search in both directions.
This would be very handy for computing both LR and RL disparities.
Thank you for the feedback. We may consider this in one of our future SDK releases.