Hi,
I'm trying to perform camera undistortion with VPI using the VIC backend inside a GStreamer pipeline, combining two examples: /opt/nvidia/vpi2/samples/11-fisheye and nvsample_cudaprocess. In the gpu_process function of nvsample_cudaprocess.cu I call vpiSubmitRemap much like main in 11-fisheye/main.cpp does, but first I have to convert the EGLImageKHR to a VPIImage with vpiImageCreateEGLImageWrapper. The following snippet from gpu_process used to work with VPI 1:
vpiImageCreateEGLImageWrapper(image, NULL, VPI_BACKEND_CUDA, &vimg);
vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA, vimg, tmpIn, NULL);
vpiSubmitRemap(stream, VPI_BACKEND_CUDA, remap, tmpIn, tmpOut, VPI_INTERP_CATMULL_ROM, VPI_BORDER_ZERO, 0);
vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA, tmpOut, vimg, NULL);
vpiStreamSync(stream);
using
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink display-id=0 -e
The problem is that vpi/EGLInterop.h is missing in VPI 2, so this no longer builds. Is there a substitute for vpiImageCreateEGLImageWrapper in VPI 2, or a sample showing the equivalent? There is a similar topic, "VPI in a GStreamer pipeline", but it likely uses VPI 1.
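For reference, from the VPI 2 headers my best guess at the replacement is the generic vpiImageCreateWrapper() with a VPIImageData whose bufferType is VPI_IMAGE_BUFFER_EGLIMAGE, something along these lines (untested sketch; the buffer type, field names and flags are my assumptions):

#include <string.h>
#include <vpi/Image.h>

// Wrap the EGLImageKHR handed to gpu_process() in a VPIImage (VPI 2 guess).
VPIImageData data;
memset(&data, 0, sizeof(data));
data.bufferType = VPI_IMAGE_BUFFER_EGLIMAGE;  // assumed EGL interop buffer type
data.buffer.egl = image;                      // EGLImageKHR from nvivafilter

VPIImage vimg = NULL;
// Backends requested at wrap time, like the flags of the old wrapper call.
vpiImageCreateWrapper(&data, NULL, VPI_BACKEND_CUDA | VPI_BACKEND_VIC, &vimg);

If that is the intended migration path, a pointer to a sample using it from nvivafilter would help.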
A side issue with Remap: for 1080p NV12 the latency of the CUDA backend compares favorably to VIC, but that comparison doesn't account for the percentage of GPU being utilized, does it?
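For context, a minimal sketch of the kind of per-submit timing I mean, using VPI events (assuming the event API is unchanged in VPI 2; the backend flag gets swapped between VPI_BACKEND_CUDA and VPI_BACKEND_VIC):

#include <vpi/Event.h>

// Time only the Remap submission on the stream.
VPIEvent evStart = NULL, evStop = NULL;
vpiEventCreate(0, &evStart);
vpiEventCreate(0, &evStop);

vpiEventRecord(evStart, stream);
vpiSubmitRemap(stream, VPI_BACKEND_CUDA, remap, tmpIn, tmpOut, VPI_INTERP_CATMULL_ROM, VPI_BORDER_ZERO, 0);
vpiEventRecord(evStop, stream);
vpiEventSync(evStop);

float msec = 0.0f;
vpiEventElapsedTimeMillis(evStart, evStop, &msec);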
Thanks.