I was wondering whether it is possible (within reasonable limits) to use CUDA to change the parallax of an existing 3D image.
I have some stereo image pairs where the parallax is far too large, and others where it is too small and needs enhancement.
Using an optical flow algorithm I might compare the left and right images and modify the displacement of pixels based on their individual parallax (or separation). Alternatively, I might reconstruct a depth map and, based on that, re-render the image from a slightly different camera perspective (this kind of thing has been demonstrated with the Kinect sensor).
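To make the first idea concrete, here is a minimal sketch of the disparity-scaling step in plain Python/NumPy (just to illustrate the math; a real implementation would run this per-pixel loop as a CUDA kernel). It assumes a per-pixel disparity map is already available from the optical flow or stereo matching step, and the function name and hole-filling strategy are my own invention, not an established API:

```python
import numpy as np

def scale_parallax(left, disparity, scale):
    """Re-render the right view with disparities multiplied by `scale`.

    left:      (H, W) grayscale image
    disparity: (H, W) per-pixel horizontal disparity (assumed given, e.g.
               from an optical-flow or stereo-matching step)
    scale:     1.0 keeps the original parallax; <1 reduces it, >1 enhances it
    """
    H, W = left.shape
    out = np.zeros_like(left)
    filled = np.zeros((H, W), dtype=bool)
    # Forward-warp each left pixel to its new horizontal position.
    # In CUDA, each thread would handle one (y, x) pixel.
    for y in range(H):
        for x in range(W):
            nx = int(round(x - scale * disparity[y, x]))
            if 0 <= nx < W:
                out[y, nx] = left[y, x]
                filled[y, nx] = True
    # Crude hole filling: copy the nearest filled pixel from the left.
    # This is exactly where disocclusions would need real interpolation.
    for y in range(H):
        for x in range(1, W):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

The holes left by the forward warp are the "things that were obscured from the camera" mentioned below; the copy-from-neighbor fill is the simplest possible interpolation over them.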
Do you think this approach could be feasible? Of course it is not possible to create a completely new camera perspective that is much different from where the image was originally taken. Things that were obscured from the camera will not magically pop up: how can the computer guess what is behind an object? But maybe I could interpolate over these imperfections, hopefully without totally destroying the quality of the image.
And the reason I am asking this here: I would want to use CUDA to do this at interactive frame rates, and CUDA and 3DVision make a nice couple.