Getting pointer from NvBuffer

Hello,

I am developing an application to do some video processing. For this use case, I am grabbing a video stream using the V4L2 interface directly (example 12_camera_v4l2_cuda).

In the end, my goal is to get a pointer to the frame data so I can do some more processing with OpenCV while avoiding memory copies.

The output of the example I mentioned is an NvBuffer file descriptor. What is the best way to get a pointer out of that, and is it also the way to get the best performance?

Thanks

Hi,
Please refer to these topics for accessing the buffer as cv::Mat or cv::gpu::GpuMat:
NVBuffer (FD) to opencv Mat - #6 by DaneLLL
LibArgus EGLStream to nvivafilter - #14 by DaneLLL
Real-time CLAHE processing of video, framerate issue. Gstreamer + nvivafilter + OpenCV - #5 by Honey_Patouceul
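
In short, the CPU-side mapping described in those topics looks roughly like this. It is only a minimal sketch: process_rgba_fd() is a hypothetical helper, and it assumes the buffer has already been converted to ABGR32 (RGBA in memory) with NvBufferTransform:

#include <opencv2/opencv.hpp>
#include "nvbuf_utils.h"

/* Sketch: wrap an ABGR32 NvBuffer (dmabuf fd) in a cv::Mat without copying */
static void process_rgba_fd(int rgba_fd)
{
    NvBufferParams params;
    if (NvBufferGetParams(rgba_fd, &params) != 0)
        return;

    void *va = NULL;
    if (NvBufferMemMap(rgba_fd, 0, NvBufferMem_Read_Write, &va) != 0)
        return;
    NvBufferMemSyncForCpu(rgba_fd, 0, &va);      /* make the CPU view coherent */

    /* The step is the buffer pitch, not width * 4 */
    cv::Mat frame(params.height[0], params.width[0], CV_8UC4, va, params.pitch[0]);
    /* ... OpenCV processing on 'frame' ... */

    NvBufferMemSyncForDevice(rgba_fd, 0, &va);   /* flush CPU writes back to the buffer */
    NvBufferMemUnMap(rgba_fd, 0, &va);
}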

Hi,

Thanks for the reply; the first topic is what I was looking for: NVBuffer (FD) to opencv Mat - #6 by DaneLLL

However, I have a small difference and a question. Since this is a latency-critical imaging application, I need to avoid color conversions, and my camera outputs YUV 4:2:2 (YUV422). Is there any way to map the buffer without converting it to RGB?

Thanks

Hi,
We know that BGR is the common format in OpenCV; we are not sure about YUV422 formats. If OpenCV supports your YUV422 format in the same way it supports BGR, you can try the same approach to map the NvBuffer to a cv::Mat.
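
If you want to try that, here is a rough sketch of what it could look like on the CPU side. It assumes your camera delivers packed YUYV (YUY2) and that the dmabuf fd holds data in NvBufferColorFormat_YUYV; both are assumptions you would need to verify for your sensor, and process_yuyv_fd() is just an illustrative name:

#include <opencv2/opencv.hpp>
#include "nvbuf_utils.h"

/* Sketch: map a packed YUYV (4:2:2) NvBuffer into a cv::Mat without copying */
static void process_yuyv_fd(int yuyv_fd)
{
    NvBufferParams params;
    if (NvBufferGetParams(yuyv_fd, &params) != 0)
        return;

    void *va = NULL;
    if (NvBufferMemMap(yuyv_fd, 0, NvBufferMem_Read, &va) != 0)
        return;
    NvBufferMemSyncForCpu(yuyv_fd, 0, &va);

    /* Packed 4:2:2 is 2 bytes per pixel, so CV_8UC2 with the buffer pitch as step */
    cv::Mat yuyv(params.height[0], params.width[0], CV_8UC2, va, params.pitch[0]);

    /* Only convert if a routine really needs BGR; this step costs CPU time */
    cv::Mat bgr;
    cv::cvtColor(yuyv, bgr, cv::COLOR_YUV2BGR_YUY2);

    NvBufferMemUnMap(yuyv_fd, 0, &va);
}

Most OpenCV functions that care about color still expect BGR or grayscale, so how much conversion you can really avoid depends on the processing you plan to do.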

Hi,

Thanks for the reply. Since I am interested in cv::GpuMat, I have followed this approach, based on the different forum topics you shared:

NvBuffer → EGLImage → EGLFrame → OpenCV Gpu Mat

Here is the example code:

/* Convert the camera buffer from YUV422 to YUV420P with the hardware converter */
if (-1 == NvBufferTransform(cameraAcquireParams->g_buff[v4l2_buf.index].dmabuff_fd,
        cameraAcquireParams->render_dmabuf_fd, &transParams))
    ERROR_RETURN("Failed to convert the buffer");

/* Create EGLImage from the dmabuf fd */
cameraAcquireParams->egl_image = NvEGLImageFromFd(cameraAcquireParams->egl_display,
        cameraAcquireParams->render_dmabuf_fd);
if (cameraAcquireParams->egl_image == NULL)
    ERROR_RETURN("Failed to map dmabuf fd (0x%X) to EGLImage",
        cameraAcquireParams->render_dmabuf_fd);

/* Map the EGLImage into a CUeglFrame */
CUresult status;
CUeglFrame myframe;
CUgraphicsResource pResource = NULL;

cudaFree(0);    /* creates the CUDA context before the driver API calls below */
status = cuGraphicsEGLRegisterImage(&pResource, cameraAcquireParams->egl_image,
    CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
if (status != CUDA_SUCCESS)
{
    printf("cuGraphicsEGLRegisterImage failed: %d, cuda process stop\n", status);
    break;
}

status = cuGraphicsResourceGetMappedEglFrame(&myframe, pResource, 0, 0);
if (status != CUDA_SUCCESS)
{
    printf("cuGraphicsResourceGetMappedEglFrame failed\n");
    break;
}
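
For completeness, the CUDA mapping and the EGLImage would also be released once processing is done; a teardown sketch using the same variables as above:

cuCtxSynchronize();                              /* wait for GPU work on the frame */
cuGraphicsUnregisterResource(pResource);         /* release the CUDA mapping */
NvDestroyEGLImage(cameraAcquireParams->egl_display, cameraAcquireParams->egl_image);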

My question is: I do not really know whether this is the correct approach for getting the image into a GpuMat. Is it correct? It also looks like it performs copy operations, but I am not sure about that.

Hi,
The example code should work for RGBA. It is similar to
LibArgus EGLStream to nvivafilter - #14 by DaneLLL

For YUV422 you would need to try it out; we are not sure how well YUV422 is supported in cv::gpu::GpuMat.
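
For reference, once cuGraphicsResourceGetMappedEglFrame() succeeds on a pitch-linear RGBA frame, it can be wrapped in a cv::cuda::GpuMat (cv::gpu::GpuMat in OpenCV 2.x) without an extra copy. This is only a sketch, assuming a single-plane CU_EGL_FRAME_TYPE_PITCH frame; wrap_egl_frame() is an illustrative name:

#include <opencv2/core/cuda.hpp>
#include <cudaEGL.h>

/* Sketch: wrap a mapped, pitch-linear RGBA CUeglFrame in a GpuMat (no copy).
   'myframe' is the CUeglFrame filled by cuGraphicsResourceGetMappedEglFrame(). */
static cv::cuda::GpuMat wrap_egl_frame(const CUeglFrame &myframe)
{
    /* frame.pPitch[0] points to device memory; pitch is the row stride in bytes */
    return cv::cuda::GpuMat(myframe.height, myframe.width, CV_8UC4,
                            myframe.frame.pPitch[0], myframe.pitch);
}

This only holds for the pitch-linear RGBA case; for YUV422 the plane count and layout depend on the eglColorFormat, which is why trying it out is the way to verify.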

The format conversion is done by the hardware converter, so throughput is good. For optimal performance of the hardware converter, please refer to this post on disabling DFS:
Nvvideoconvert issue, nvvideoconvert in DS4 is better than Ds5? - #3 by DaneLLL
