I have an application that runs inference and disparity mapping to place objects in 3D space. The pipeline looks something like this:
- Start cuEGLStreamConsumer
- Get cudaEGLFrame
- Create CUsurfObjects (luma/chroma)
- Run cudaNV12ToRGBAf kernel
- Pass mapped RGBAf to jetson-inference
- Run cudaRGBAfToRGB kernel
- Declare a cv::gpu::GpuMat over the mapped rgb data
- Run gpu::remap with output to my vx_images
- VisionWorks disparity map graph
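For context, the cudaNV12ToRGBAf step above is essentially a per-pixel YUV-to-RGB conversion. Below is a minimal CPU sketch of the BT.601 full-range math for a single pixel (the CUDA kernel applies the same arithmetic per thread, reading interleaved UV from the half-resolution chroma plane); the exact coefficients and rounding in jetson-utils may differ slightly:

```cpp
#include <algorithm>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

static uint8_t clamp255(float v) {
    return static_cast<uint8_t>(std::min(255.0f, std::max(0.0f, v)) + 0.5f);
}

// One pixel of BT.601 full-range YUV -> RGB. NV12 stores a full-resolution
// Y (luma) plane plus a half-resolution interleaved UV (chroma) plane, so
// each UV sample is shared by a 2x2 block of Y samples.
RGB yuvToRgb(uint8_t y, uint8_t u, uint8_t v) {
    float Y = static_cast<float>(y);
    float U = static_cast<float>(u) - 128.0f; // chroma is offset by 128
    float V = static_cast<float>(v) - 128.0f;
    return RGB{
        clamp255(Y + 1.402f * V),
        clamp255(Y - 0.344136f * U - 0.714136f * V),
        clamp255(Y + 1.772f * U),
    };
}
```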
This works, and I get about 7 fps at 1280x580, but I was wondering if I could eliminate the OpenCV step altogether and just add the remapping into my VisionWorks graph.
This post says something about expressing the distortion as the inverse of an affine matrix. Is there a VisionWorks tutorial somewhere on how to do camera calibration and undistortion, or should I just stick with OpenCV?
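For what it's worth, a lens undistortion is generally not expressible as a single affine matrix; remap-style APIs instead take a per-pixel dst-to-src lookup table. Since VisionWorks implements OpenVX 1.1, the graph-side equivalent of gpu::remap should be a vx_remap object filled via vxSetRemapPoint and attached to a vxRemapNode. Below is a self-contained sketch of building that dst-to-src table from a simple Brown-Conrady radial model; the Intrinsics values here are placeholders for whatever your calibration (e.g. cv::calibrateCamera) produced, and real calibrations usually also carry k3 and tangential terms:

```cpp
#include <cmath>
#include <vector>

// Hypothetical intrinsics plus two radial distortion coefficients.
// Substitute the values from your own camera calibration.
struct Intrinsics { float fx, fy, cx, cy, k1, k2; };

// For each pixel of the *undistorted* output image, apply the forward
// distortion model to find where to sample in the *distorted* source
// image. This dst->src table is exactly what remap-style APIs consume
// (cv::remap maps, or vxSetRemapPoint when populating a vx_remap).
void buildUndistortMap(const Intrinsics& K, int w, int h,
                       std::vector<float>& mapX, std::vector<float>& mapY)
{
    mapX.resize(static_cast<size_t>(w) * h);
    mapY.resize(static_cast<size_t>(w) * h);
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            // Normalized camera coordinates of the undistorted pixel.
            float x = (u - K.cx) / K.fx;
            float y = (v - K.cy) / K.fy;
            float r2 = x * x + y * y;
            float d  = 1.0f + K.k1 * r2 + K.k2 * r2 * r2; // radial factor
            // Project the distorted point back into pixel coordinates.
            mapX[static_cast<size_t>(v) * w + u] = K.fx * (x * d) + K.cx;
            mapY[static_cast<size_t>(v) * w + u] = K.fy * (y * d) + K.cy;
        }
    }
}
```

Each (mapX, mapY) entry would then go into vxSetRemapPoint(remap, u, v, mapX[v*w+u], mapY[v*w+u]) once at startup, letting the remap run inside the graph instead of through OpenCV.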
Also, is there a dedicated VisionWorks forum? I looked through the forum categories and none of them really stood out.