Convert NvBuffer to cv::Mat

I am using the example code for the NVIDIA JPEG decoder, and after the call to decodeToBuffer() I am wondering how I can convert the resulting NvBuffer to a cv::Mat. I tried assigning the data pointer of the cv::Mat to buffer->planes[0].data, which is supposed to point to the beginning of the memory in the NvBuffer, but I keep getting segfaults in the cv::Mat code.

You would need to create an RGBA NvBuffer and then call NvBufferTransform() to convert the decoded YUV to RGBA. Please refer to this patch for mapping an RGBA NvBuffer to cv::Mat:
NVBuffer (FD) to opencv Mat - #6 by DaneLLL


Thank you for the answer.

After seeing this, I was wondering why I need to transform the NvBuffer to RGBA before I create the cv::Mat. Can I not just use a cv::Mat that is in the YUV color space and then use the OpenCV functions to do the color space conversion?


Since NvBuffer is a hardware DMA buffer, the YUV420 data is stored in 3 planes and there is alignment between each plane. Please check the example of YUV420 in 1920x1080 in
Memory for NvMap - #10 by DaneLLL

YUV420 in cv::Mat is contiguous, so the data in an NvBuffer cannot be mapped to it directly. We suggest converting to single-plane RGBA instead.

Thank you so much for the quick answer!

One last question: does that mean I can use the decodeToFd() function, use the resulting FD for the NvBufferTransform(), and make my own RGBA NvBuffer from that?

Also, the buffer returned from decodeToBuffer() seems to be a software buffer according to the documentation of the function. Would I still need the NvBufferMemSyncForCpu() call in that case?

Please call decodeToFd() to get the decoded NvBuffer. There is reference code in 12_camera_v4l2_cuda for MJPEG decoding; please check the sample and grep for decodeToFd().


When I try to create an ARGB NvBuffer in order to do the NvBufferTransform() I get a segfault, but I’ve done it exactly like the example from 13_multi_camera:

        int * nvbuffer = 0;
        NvBufferCreateParams input_params;
        input_params.width = width;
        input_params.height = height;
        input_params.layout = NvBufferLayout_Pitch;
        input_params.colorFormat = NvBufferColorFormat_ABGR32;
        input_params.nvbuf_tag = NvBufferTag_JPEG;
        NvBufferCreateEx(nvbuffer, &input_params);

The error I get before the segfault is

nvbuf_utils: nvbuffer Payload Type not supported

You may do a memset on input_params to make sure there is no garbage data, and pass the address of a valid int to receive the FD rather than a null pointer, like:

        int fd = 0;
        NvBufferCreateParams input_params;
        memset(&input_params, 0, sizeof(input_params));
        input_params.width = width;
        input_params.height = height;
        input_params.layout = NvBufferLayout_Pitch;
        input_params.colorFormat = NvBufferColorFormat_ABGR32;
        input_params.nvbuf_tag = NvBufferTag_JPEG;
        NvBufferCreateEx(&fd, &input_params);

If the issue is still present, we suggest you connect the TX2 to a USB camera and try to run 12_camera_v4l2_cuda successfully as a reference.
