Convert Argus EGL image buffer to OpenCV Mat

I need to get the Argus camera image in RGB format.

I am getting the Argus image like this and want to convert the YUV image to RGB:

// Get the frame's Image and cast it to the buffer-access interfaces.
Image *image = iFrame->getImage();
IImage *iImage = interface_cast<IImage>(image);
IImage2D *iImage2D = interface_cast<IImage2D>(image);

From here I am trying to create the Y and UV Mats, but the data is in some weird format:

IImage(2D): buffer 0 (4032x3040, 4032 stride), 12451840 bytes
IImage(2D): buffer 1 (2016x1520, 4032 stride), 6291456 bytes

And OpenCV Mat is not able to read the data.

Can you give me some idea of how this buffer is structured and if there is a way to create OpenCV Mat from this?
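
For context, that buffer layout is what a semi-planar 4:2:0 format (NV12-like) looks like: buffer 0 is the full-resolution Y plane and buffer 1 is the half-resolution interleaved Cb/Cr plane. Assuming that layout and continuing from the snippet above (iImage and iImage2D already obtained), a minimal sketch of wrapping both planes and converting to BGR on the CPU could look like the following. cv::cvtColorTwoPlane needs a recent OpenCV (4.x), and the Cb/Cr order should be verified (switch to COLOR_YUV2BGR_NV21 if the colors come out swapped):

// Minimal sketch, assuming a semi-planar NV12-like layout:
// plane 0 = full-resolution Y, plane 1 = half-resolution interleaved Cb/Cr.
const uint8_t *y  = static_cast<const uint8_t *>(iImage->mapBuffer(0));
const uint8_t *uv = static_cast<const uint8_t *>(iImage->mapBuffer(1));

Size2D<uint32_t> ySize  = iImage2D->getSize(0);
Size2D<uint32_t> uvSize = iImage2D->getSize(1);

cv::Mat yPlane(ySize.height(), ySize.width(), CV_8UC1,
               const_cast<uint8_t *>(y), iImage2D->getStride(0));
cv::Mat uvPlane(uvSize.height(), uvSize.width(), CV_8UC2,
                const_cast<uint8_t *>(uv), iImage2D->getStride(1));

cv::Mat bgr;
cv::cvtColorTwoPlane(yPlane, uvPlane, bgr, cv::COLOR_YUV2BGR_NV12);  // needs <opencv2/imgproc.hpp>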

Did you check the yuvJpeg sample in the MMAPI?

Thanks

Thank you for your response.

Yes, I am using exactly that example as a starting point.

It uses the JPEG encoder to save the image to a file, but I need to get the data in real time and avoid encoding latency. Unfortunately, I did not find any examples of reading RGB data from IImage2D.

There is another example that requests RGB data instead (demosaicOutput), which uses PIXEL_FMT_LegacyRGBA, but this does not work for getting more than one frame. In the same example, PIXEL_FMT_YCbCr_420_888 works properly in a loop, but with the RGBA format it freezes on the second iFrameConsumer->acquireFrame() call.
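
For what it's worth, here is a minimal sketch of the acquisition loop being described, modeled on the jetson_multimedia_api Argus samples (iFrameConsumer and FRAME_COUNT are placeholders from that sample setup, not from this thread):

// Sketch of the capture loop; with PIXEL_FMT_YCbCr_420_888 it runs fine,
// with PIXEL_FMT_LegacyRGBA it blocks on the second acquireFrame().
for (uint32_t i = 0; i < FRAME_COUNT; i++)
{
    Argus::UniqueObj<EGLStream::Frame> frame(iFrameConsumer->acquireFrame());
    EGLStream::IFrame *iFrame = Argus::interface_cast<EGLStream::IFrame>(frame);
    if (!iFrame)
        break;

    EGLStream::Image *image = iFrame->getImage();
    // ... process the image ...
}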

How about the topic below?

Thank you for your response, but the link there seems to be broken.

Here is another thing I was trying:

            Image *image = iFrame->getImage();
            IImage *iImage = interface_cast<IImage>(image);
            IImage2D *iImage2D = interface_cast<IImage2D>(image);

            // Map plane 0 (the Y plane) for CPU access.
            int planeIdx = 0;
            const uint8_t *d = static_cast<const uint8_t *>(iImage->mapBuffer(planeIdx));

            if (!d)
                ORIGINATE_ERROR("\tFailed to map buffer\n");

            Size2D<uint32_t> size = iImage2D->getSize(planeIdx);

            // Wrap the mapped plane in a single-channel Mat, honoring the row stride.
            cv::Mat yChannel(size.height(), size.width(), CV_8UC1, (void *)d, iImage2D->getStride(planeIdx));
            cv::imwrite("test.jpg", yChannel);

And I am getting a result like this.

Are there any known documentation or samples describing how the data is stored in the image buffers?

Hi,
Please check this patch and apply to your application:
How to create OpenCV cv::Mat from NvBuffer in Jetpack 5.1 - #8 by DaveYYY
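
For anyone following along, the general shape of that patch (a sketch only, not the patch verbatim) is: convert the Argus image into a pitch-linear RGBA dmabuf (e.g. via the IImageNativeBuffer interface, as in the samples), then map it for CPU access with the NvBufSurface API and wrap the mapped plane in a cv::Mat. Roughly, assuming JetPack 5.x headers and an existing dmabuf fd (nvbufToMat is just a hypothetical helper name):

#include <nvbufsurface.h>
#include <opencv2/core.hpp>

bool nvbufToMat(int dmabuf_fd, cv::Mat &rgba)
{
    NvBufSurface *surf = nullptr;
    if (NvBufSurfaceFromFd(dmabuf_fd, (void **)&surf) != 0)
        return false;

    if (NvBufSurfaceMap(surf, 0, 0, NVBUF_MAP_READ) != 0)    // map plane 0 of surface 0
        return false;
    NvBufSurfaceSyncForCpu(surf, 0, 0);                      // make the mapping coherent for the CPU

    NvBufSurfaceParams &p = surf->surfaceList[0];
    // Zero-copy wrap of the mapped RGBA plane; clone() it if it must outlive the mapping.
    rgba = cv::Mat(p.height, p.width, CV_8UC4,
                   p.mappedAddr.addr[0], p.planeParams.pitch[0]);

    // Call NvBufSurfaceUnMap(surf, 0, 0) once the pixels are no longer needed.
    return true;
}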

Thank you for your response; this was very helpful. I was able to get the ARGB image.

Since I need the best possible performance, I want to avoid any copies and color conversions.

Is there a way to use the YUV_444 format and then construct an OpenCV CUDA GpuMat in YUV format, avoiding color conversions and ideally extra copies as well?

I tried using the NVBUF_COLOR_FORMAT_YUV444 color format, but I am getting an error when reading the data. I assume this is because YUV444 is multi-planar, but I am not sure if there is a fast way to construct a GpuMat from YUV444 data.

Thank you for the help.

Hi @DaneLLL

I’d really appreciate it if you could comment on this.

Hi,
Since NvBufSurface in YUV444 is multi-planar, it may not be mapped to OpenCV directly. We would suggest using RGBA. For optimal performance, please run the script from:
VPI - Vision Programming Interface: Performance Benchmark

This fixes the hardware converter at its maximum clock.

Hi @DaneLLL

Thank you for your response; this is very helpful.

Are there examples of how I can get an OpenCV GpuMat without any extra copies?

Thank you

Hi,
Please refer to the patches for mapping NvBufSurface to gpuMat:
Error generated while running the code after connecting the camera - #15 by DaveYYY
How to create opencv gpumat from nvstream? - #18 by DaneLLL
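
For reference, the interop path used in those patches looks roughly like the sketch below. This is not verbatim from the patches; it assumes an RGBA, pitch-linear NvBufSurface, the CUDA driver API, and an already-current CUDA context (e.g. created by the runtime via cudaFree(0)), and wrapSurface is a hypothetical helper name:

#include <nvbufsurface.h>
#include <cuda.h>
#include <cudaEGL.h>
#include <opencv2/core/cuda.hpp>

cv::cuda::GpuMat wrapSurface(NvBufSurface *surf)
{
    // Export the surface as an EGLImage and register it with CUDA.
    NvBufSurfaceMapEglImage(surf, 0);
    CUgraphicsResource resource = nullptr;
    cuGraphicsEGLRegisterImage(&resource, surf->surfaceList[0].mappedAddr.eglImage,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    // Retrieve the device pointers for the registered image.
    CUeglFrame eglFrame;
    cuGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0);
    cuCtxSynchronize();

    // Wrap plane 0 in a GpuMat without copying.
    // Clean up later with cuGraphicsUnregisterResource() and NvBufSurfaceUnMapEglImage().
    return cv::cuda::GpuMat(eglFrame.height, eglFrame.width, CV_8UC4,
                            eglFrame.frame.pPitch[0], eglFrame.pitch);
}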

Hi @DaneLLL

Thank you,

I got the RGBA Mat, but the conversion, and especially registering the image (cuGraphicsResourceGetMappedEglFrame), is a bit slow, since I have three 4032x3040 video streams at 30 FPS.

I have another question about using the YUV format.

In my case, it's not even mandatory to create an OpenCV Mat as long as I can access the channel data with CUDA. I can modify my processing to take each channel separately, as long as they are available in CUDA.

Are there any resources on how I can access each channel without copying it to the CPU and then moving it back?

Hi,
We have a demonstration of accessing NvBufSurface through CUDA in the jetson_multimedia_api samples. Please check:
Encode frames using v4l2 and NvBufSurface - #7 by DaneLLL
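
For the per-channel case asked about above: once a pitch-linear, multi-planar surface (e.g. YUV444) has been registered as in the GpuMat sketch earlier, the CUeglFrame exposes one device pointer per plane, so each channel can be used from CUDA (or wrapped as its own single-channel GpuMat) without ever touching the CPU. A hedged sketch, not taken verbatim from the linked sample (wrapPlanes is a hypothetical helper name):

#include <cudaEGL.h>
#include <opencv2/core/cuda.hpp>
#include <vector>

// 'eglFrame' obtained via cuGraphicsResourceGetMappedEglFrame() as shown earlier.
std::vector<cv::cuda::GpuMat> wrapPlanes(const CUeglFrame &eglFrame)
{
    std::vector<cv::cuda::GpuMat> planes;
    for (unsigned int p = 0; p < eglFrame.planeCount; ++p)
    {
        // For YUV444 every plane is full resolution, 8 bits per pixel.
        planes.emplace_back(eglFrame.height, eglFrame.width, CV_8UC1,
                            eglFrame.frame.pPitch[p], eglFrame.pitch);
    }
    return planes;  // planes[0] = Y, [1] = Cb, [2] = Cr (order per the surface's color format)
}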
