How to pass frames from OpenCV to the hardware encoder

I’m trying to build an image-processing application.
An outline of my application is below.

  1. Capture from a camera device
  2. Convert the color format
  3. Image processing
  4. Encode with the HW encoder

Steps 1 to 3 will be implemented using OpenCV.
What I want to achieve is passing the result of the image processing (e.g. an overlay image with bounding boxes) to the HW encoder.
I referred to the jetson_multimedia_api reference, but I am confused.
What is the best solution?

HW: Jetson NX Developer Kit, SW: JetPack 4.4

Thank you.

Please refer to this sample:
Displaying to the screen with OpenCV and GStreamer - #9 by DaneLLL

I tried this one, but I need to take the encoder output data for post-processing.
I am considering using the Multimedia API.
Do you have any information?

There are patches demonstrating MMAPI + cv::Mat/cv::cuda::GpuMat. Please take a look at
NVBuffer (FD) to opencv Mat - #6 by DaneLLL
LibArgus EGLStream to nvivafilter - #7 by DaneLLL

Thanks for your reply.
But my camera is a V4L2 device. Is it possible to capture from a V4L2 camera?

Please look at 12_camera_v4l2_cuda. The sample demonstrates V4L2 capture.
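If you stay in OpenCV, a V4L2 camera can also be opened through a GStreamer pipeline. A minimal sketch; the device path, resolution, and framerate below are assumptions for illustration and depend on your camera:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

int main() {
    // v4l2src reads the camera; videoconvert produces BGR for OpenCV
    std::string pipeline =
        "v4l2src device=/dev/video0 ! "
        "video/x-raw,width=1280,height=720,framerate=30/1 ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink";
    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open V4L2 camera" << std::endl;
        return -1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {
        // ... image processing on frame ...
    }
    return 0;
}
```

This only works if your OpenCV build has GStreamer support enabled (check `cv::getBuildInformation()`).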

From OpenCV, you can use a GStreamer pipeline as a VideoWriter:

// Get resolution and framerate from capture
unsigned int width = cap.get (cv::CAP_PROP_FRAME_WIDTH);
unsigned int height = cap.get (cv::CAP_PROP_FRAME_HEIGHT);
unsigned int fps = cap.get (cv::CAP_PROP_FPS);

// Create the writer with a gstreamer pipeline encoding into H264, muxing into a mkv container and saving to file
cv::VideoWriter gst_nvh264_writer ("appsrc ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h264enc ! video/x-h264,format=byte-stream ! h264parse ! matroskamux ! filesink location=test-nvh264-writer.mkv",
                                   cv::CAP_GSTREAMER, 0, fps, cv::Size (width, height));
if (!gst_nvh264_writer.isOpened ()) {
    std::cout << "Failed to open gst_nvh264 writer." << std::endl;
    return (-6);
}

and in the loop, push your processed frames (one for each captured frame) with:

gst_nvh264_writer.write (frame);
Thank you, DaneLLL and Honey_Patouceul.
I achieved importing images from the V4L2 device.
Next, I’ll try some image processing and encoding with H.264.
I am considering using NvVideoEncoder in the MMAPI.
How can I pass a cv::Mat to the encoder?

You are not able to pass a cv::Mat to the encoder directly. You need to create an NvBuffer and copy your cv::Mat data into it.
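One way to do that copy is with the nvbuf_utils helpers from JetPack 4.x. A minimal sketch, assuming the cv::Mat has already been converted to a 4-channel format matching the NvBuffer; the function name matToNvBuffer is made up for illustration:

```cpp
#include <opencv2/opencv.hpp>
#include "nvbuf_utils.h"  // JetPack 4.x Multimedia API

// Sketch: create a pitch-linear ABGR32 NvBuffer and copy a CV_8UC4
// cv::Mat into it. Returns 0 on success.
int matToNvBuffer(const cv::Mat &src, int *dmabuf_fd) {
    NvBufferCreateParams params = {0};
    params.width = src.cols;
    params.height = src.rows;
    params.layout = NvBufferLayout_Pitch;
    params.payloadType = NvBufferPayload_SurfArray;
    params.colorFormat = NvBufferColorFormat_ABGR32;
    params.nvbuf_tag = NvBufferTag_NONE;
    if (NvBufferCreateEx(dmabuf_fd, &params) != 0)
        return -1;
    // Raw2NvBuffer copies one plane of raw data into the NvBuffer,
    // taking care of the hardware pitch for you
    return Raw2NvBuffer(src.data, 0, src.cols, src.rows, *dmabuf_fd);
}
```

This avoids the manual NvBufferMemMap/memcpy loop; the buffer must be destroyed later with NvBufferDestroy.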

Yes, I see.
My approach is as below:

  1. Create an NvBuffer for importing the cv::Mat (ARGB32)
  2. Create an NvBuffer in YUV420M to match the encoder input format
  3. Create an NvVideoEncoder
    [main process]
  4. Dequeue an output-plane buffer from the encoder
  5. Convert ARGB to YUV420M
  6. Enqueue the buffer from step 2 into the output plane of the encoder

Hmm, I tried copying a cv::Mat into an NvBuffer.
Is the code below correct?
I think that if par.num_planes equals 1, this is correct.
But this way seems to be inefficient.

cv::Mat src = cv::Mat::zeros(height, width, CV_8UC4);
ret = NvBufferGetParams(fd, &par);    // fd is V4L2_PIX_FMT_ABGR32
for (unsigned int plane = 0; plane < par.num_planes; plane++) {
    ret = NvBufferMemMap(fd, plane, NvBufferMem_Write, &vaddr);  // vaddr is void*
    if (ret == 0) {
        for (unsigned int i = 0; i < par.height[plane]; i++) {
            memcpy((uint8_t *)vaddr + i * par.pitch[plane],
                   src.data + i * src.step,
                   src.cols * sizeof(uint8_t) * 4);
        }
        NvBufferMemSyncForDevice(fd, plane, &vaddr);
        NvBufferMemUnMap(fd, plane, &vaddr);
    }
}

The main format in OpenCV is BGR, which is not well supported on Jetson platforms. After the processing, you would need to convert it to RGBA, copy to an NvBuffer, convert to YUV420, and then do the encoding. It may not bring good performance. There are CUDA filters which can be applied directly to the RGBA buffer. If you can use the CUDA filters in your use case, performance can be better.

Yes, I think so.
I’m thinking of using OpenCV with CUDA.
Does “CUDA filter” mean OpenCV with CUDA?

I tried to

  1. Convert BGR to ABGR32 (cvtColor) / Done
  2. Copy to NvBuffer / Done
  3. Transform ABGR32 to YUV420 (NvBufferTransform) / Done
  4. Pass to the encoder / now trying

In step 4, I have an fd that contains the YUV420 data.
I understand the dequeue/queue flow of the encoder.
If I use the code below, how should I pass the data?

if (-1 == NvBufferTransform(in_fd, out_fd, &transParams)) {    // out_fd contains YUV420
    ERROR_RETURN("Failed to convert the buffer");
}
struct v4l2_buffer v4l2_output_buf;
struct v4l2_plane output_planes[MAX_PLANES];
NvBuffer *outplane_buffer = NULL;
memset(&v4l2_output_buf, 0, sizeof(v4l2_output_buf));
memset(output_planes, 0, sizeof(output_planes));
v4l2_output_buf.m.planes = output_planes;
ret = enc->output_plane.dqBuffer(v4l2_output_buf, &outplane_buffer, NULL, 10);
// something to do and enqueue
// something to do and enqueue
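Assuming the encoder's output plane was set up with V4L2_MEMORY_DMABUF (as in the MMAPI encode samples), one way to queue the converted fd back is to attach it to the dequeued v4l2 buffer. A sketch under that assumption, with error handling omitted:

```cpp
// After dqBuffer: attach the converted dmabuf fd to each plane and
// queue the buffer back to the encoder's output plane (DMABUF mode).
for (uint32_t j = 0; j < outplane_buffer->n_planes; j++) {
    v4l2_output_buf.m.planes[j].m.fd = out_fd;
    // bytesused must be non-zero; a zero-byte buffer signals EOS
    v4l2_output_buf.m.planes[j].bytesused =
        outplane_buffer->planes[j].length;
}
ret = enc->output_plane.qBuffer(v4l2_output_buf, NULL);
```

The encoded bitstream is then collected from the capture plane via its dqBuffer callback, as in the 01_video_encode sample.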

Thank you.

I suppose @DaneLLL was referring to nvivafilter. This GStreamer plugin can take a custom library for doing GPU processing on NVMM frames. You can use OpenCV CUDA with it.
You may find some links from this post.

For using CUDA filter, function calls are like:

    //CUDA postprocess
    EGLImageKHR egl_image;
    egl_image = NvEGLImageFromFd(egl_display, dmabuf_fd);
    CUresult status;
    CUeglFrame eglFrame;
    CUgraphicsResource pResource = NULL;
    status = cuGraphicsEGLRegisterImage(&pResource, egl_image,
                 CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    if (status != CUDA_SUCCESS)
        printf("cuGraphicsEGLRegisterImage failed: %d \n", status);
    status = cuGraphicsResourceGetMappedEglFrame(&eglFrame, pResource, 0, 0);
    status = cuCtxSynchronize();
    if (create_filter) {
        filter = cv::cuda::createSobelFilter(CV_8UC4, CV_8UC4, 1, 0, 3, 1, cv::BORDER_DEFAULT);
        //filter = cv::cuda::createGaussianFilter(CV_8UC4, CV_8UC4, cv::Size(31,31), 0, 0, cv::BORDER_DEFAULT);
        create_filter = false;
    }
    cv::cuda::GpuMat d_mat(h, w, CV_8UC4, eglFrame.frame.pPitch[0]);
    filter->apply(d_mat, d_mat);

    status = cuCtxSynchronize();
    status = cuGraphicsUnregisterResource(pResource);
    NvDestroyEGLImage(egl_display, egl_image);

The dmabuf_fd is in RGBA format. After the processing, you can convert it back to NV12 and send to hardware encoder.
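That RGBA-to-NV12 step can be done with NvBufferTransform. A minimal sketch, assuming both dmabuf fds were created with nvbuf_utils (the RGBA source and an NV12 destination buffer); the function name rgbaToNv12 is made up for illustration:

```cpp
#include "nvbuf_utils.h"  // JetPack 4.x Multimedia API

// Sketch: convert the processed RGBA dmabuf into an NV12 dmabuf
// that can then be queued to the hardware encoder's output plane.
int rgbaToNv12(int rgba_fd, int nv12_fd) {
    NvBufferTransformParams trans = {0};
    trans.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    trans.transform_filter = NvBufferTransform_Filter_Smart;
    // NvBufferTransform performs the color-format conversion between
    // the source and destination buffers in hardware (VIC)
    return NvBufferTransform(rgba_fd, nv12_fd, &trans);
}
```

The conversion runs on the VIC rather than the CPU, so it fits well between the CUDA post-processing and the encoder.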