VisionWorks Equivalent to OpenCV gpu::remap

I have an application that does inference and disparity mapping to place objects in 3D space. It looks something like this:

  1. Start cuEGLStreamConsumer
  2. Get cudaEGLFrame
  3. Create CUsurfObjects (luma/chroma)
  4. Run cudaNV12ToRGBAf kernel
  5. Pass mapped RGBAf to jetson-inference
  6. Run cudaRGBAfToRGB kernel
  7. Declare a cv::gpu::GpuMat over the mapped RGB data
  8. Run gpu::remap with output to my vx_images
  9. VisionWorks disparity map graph

This works, and I get about 7 fps at 1280x580, but I was wondering whether I could eliminate the OpenCV step altogether and just add the remapping to my VisionWorks graph.

This post mentions expressing the distortion as the inverse of an affine matrix. Is there a VisionWorks tutorial somewhere on how to do camera calibration and undistortion, or should I just stick with OpenCV?

Also, is there a dedicated VisionWorks forum? I looked through the categories and none of them really stood out.


It looks like you have integrated a lot of frameworks.

We recommend modifying jetson-inference to handle the image reading and rendering directly.
Integrating more frameworks can cost you a memory copy at each boundary where the data has to be reinterpreted.

VisionWorks targets basic vision operations and doesn't include a camera calibration implementation.
You can build one yourself with our feature detector and matching APIs.

We don't have a dedicated VisionWorks board. You can post questions under the board for the platform you are using.