Convert OpenCV maps to VPI WarpMap

Hi,

I have an OpenCV program that includes some warping operations, which I want to speed up using VPI (CUDA backend) on my AGX Xavier. For operations like undistortion and perspective warping, I can use VPI functions directly. However, for something like a spherical warp, VPI has no built-in function. Therefore, I am creating the map from OpenCV’s warpers, and now I need to convert it to a VPI WarpMap so that I can use it with the Remap function.

Since OpenCV provides two maps (map_x and map_y), I need a way to feed those same values into a VPI WarpMap with a dense WarpGrid of the same size.

I would appreciate any help in making this conversion.

Hi,

It sounds like OpenCV can do the spherical warp directly.

Would you mind providing an OpenCV sample, so we can check how to generate the warp grid from the x-axis and y-axis maps?

Thanks.

This is a sample of the OpenCV code:

GpuMat input, output;

Mat map_x, map_y;
GpuMat g_mapx, g_mapy;
// f is the warper scale (focal length in pixels); K and R are the 3x3
// camera intrinsic and rotation matrices expected by buildMaps()
cv::Ptr<cv::WarperCreator> warper_creator = cv::makePtr<cv::SphericalWarper>();
cv::Ptr<cv::detail::RotationWarper> warper = warper_creator->create(f);
warper->buildMaps(cv::Size(inputWidth, inputHeight), K, R, map_x, map_y);
g_mapx.upload(map_x);
g_mapy.upload(map_y);

cv::cuda::remap(input, output, g_mapx, g_mapy, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(0));

The buildMaps function from the SphericalWarper class gives me the two maps map_x and map_y. Both maps have the same dimensions as the output image size (not equal to the input image size). The value f used in creating the warper determines the output image size.

Now, instead of using the cv::cuda::remap function, I would like to use the VPI Remap. My guess is that each corresponding pair of values from the x and y maps should give me a keypoint for the WarpMap in VPI.

Hi,

It seems that OpenCV describes the algorithm as forward-mapping but you are using the backward version.
https://docs.opencv.org/4.1.1/d1/da0/tutorial_remap.html

You can find an example to define a dense map grid below.

https://docs.nvidia.com/vpi/algo_remap.html

Please create a warp grid whose numHorizRegions/numVertRegions and regionWidth/regionHeight together match your map size, and set the x and y values accordingly.

For example:

VPIWarpMap map;
memset(&map, 0, sizeof(map));
map.grid.numHorizRegions  = 1;
map.grid.numVertRegions   = 1;
map.grid.regionWidth[0]   = w;
map.grid.regionHeight[0]  = h;
map.grid.horizInterval[0] = 1;
map.grid.vertInterval[0]  = 1;
vpiWarpMapAllocData(&map);
...
vpiWarpMapGenerateIdentity(&map);
int i;
for (i = 0; i < map.numVertPoints; ++i)
{
    VPIKeypoint *row = (VPIKeypoint *)((uint8_t *)map.keypoints + map.pitchBytes * i);
    int j;
    for (j = 0; j < map.numHorizPoints; ++j)
    {
        row[j].x = ...;
        row[j].y = ...;
    }
}

Thanks.

I’m not sure what you mean by me using the backward version, but yes, I will try feeding keypoints into the WarpMap from the OpenCV-generated map as you’ve suggested above. Thanks.

In the documentation for VPI, it says that numHorizRegions/numVertRegions cannot exceed 4. The example code shows that to create a dense grid we should have one region, with regionWidth and regionHeight set to w and h respectively.

However, in your example code above, you have set numHorizRegions and numVertRegions to w and h respectively. I just want to confirm which way is right.

Hi,

Sorry for the confusion.
Please follow the document example to generate the dense grid.

To define a dense map, simply set up the warp grid to have just one region, and both horizontal and vertical spacing to 1.

Thanks.
