I’ve been trying to do remapping operations (non-linear transformations), e.g. spherical perspective, lens undistortion, etc. So far I’ve tried OpenCV 2 and 3 on the Jetson (I’ve compiled OpenCV about 10 times in different versions on the TX1), and I’ve used OpenCV4Tegra 2.4, which doesn’t support cameras with different color configurations or proper V4L2 controls, so many parameters can’t be changed. I’ve also tried VisionWorks, but in the end the results aren’t good enough, since the cameras aren’t fully calibrated (undistorted), and in my first attempts with VisionWorks I haven’t found a way to do this efficiently. Hence the question:
Is there an optimized way to build a non-linear transformation kernel that achieves real-time processing (15–30 fps) and can be included in a VisionWorks processing graph?
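For context, the usual approach is to precompute the distortion maps once, so the per-frame work is a pure gather: dst(y, x) = src(map_y[y, x], map_x[y, x]). This is what cv::remap and the OpenVX/VisionWorks remap node (vxRemapNode) do, and a gather parallelizes trivially on the GPU. A minimal NumPy sketch of the per-frame step (the nearest-neighbour lookup and the identity-map demo are illustrative, not the library implementation):

```python
import numpy as np

def apply_remap(src, map_x, map_y):
    """Per-frame remap: dst(y, x) = src(map_y[y, x], map_x[y, x]).

    Nearest-neighbour gather for illustration; cv::remap or a
    VisionWorks remap node performs the same lookup (with
    interpolation) on the GPU.
    """
    xs = np.clip(np.rint(map_x).astype(np.int32), 0, src.shape[1] - 1)
    ys = np.clip(np.rint(map_y).astype(np.int32), 0, src.shape[0] - 1)
    return src[ys, xs]

# Identity map: output must equal input, which checks the gather itself.
src = np.arange(12, dtype=np.uint8).reshape(3, 4)
map_x, map_y = np.meshgrid(np.arange(4, dtype=np.float32),
                           np.arange(3, dtype=np.float32))
dst = apply_remap(src, map_x, map_y)
```

Since the maps are fixed per camera, only this gather runs per frame; on the Jetson it can live on the GPU (e.g. cv::cuda::remap in OpenCV 3, or a remap node inside the VisionWorks graph), which is what makes 15–30 fps plausible even for multiple sensors.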
My use case involves multiple sensors with wide-angle/fisheye lenses (multiple cameras), which is why performance drops; the multiple cameras and wide angles are needed to get a 360-degree view.
Additionally, I’ve also tried building my own remap table, but without CUDA it seems to take too long for real-time purposes. Any insights? (I don’t know CUDA yet.)
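One thing worth checking before reaching for CUDA: the table build only has to happen once per camera, and if it is written without a per-pixel loop it is fast even on the TX1’s CPU. A hedged sketch, assuming a simple equidistant fisheye model (r_d = f·θ) with an illustrative focal length; substitute your own calibrated model:

```python
import numpy as np

def build_fisheye_undistort_map(w, h, f):
    """Build remap tables for an equidistant fisheye model (r_d = f * theta).

    For each undistorted (pinhole) output pixel, compute where to sample
    in the distorted source image. Fully vectorised, so no per-pixel
    Python loop. The focal length f and the equidistant model are
    illustrative assumptions, not a calibrated camera.
    """
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    r_u = np.hypot(dx, dy)                     # radius in the pinhole image
    theta = np.arctan2(r_u, f)                 # angle from the optical axis
    r_d = f * theta                            # equidistant projection
    scale = np.where(r_u > 0, r_d / r_u, 1.0)  # avoid 0/0 at the centre
    map_x = cx + dx * scale
    map_y = cy + dy * scale
    return map_x, map_y

map_x, map_y = build_fisheye_undistort_map(640, 480, f=300.0)
```

The resulting map_x/map_y pair can be uploaded once (e.g. to cv::cuda::remap, or converted into a VisionWorks remap object), so the slow part never runs in the frame loop.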