NV Multimedia API with OpenCV

Hello, I’m still struggling with my Jetson Nano 2GB.
I have two cameras:

Camera 0 :

VIDIOC_ENUM_FMT
Index       : 0
Type        : Video Capture
Pixel Format: 'MJPG' (compressed)
Name        : Motion-JPEG
	Size: Discrete 1920x1080
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.040s (25.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 1600x1200
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.040s (25.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 1360x768
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.040s (25.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 1280x1024
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.040s (25.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 1280x960
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.040s (25.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 1280x720
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.020s (50.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 1024x768
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.020s (50.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 800x600
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.020s (50.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 720x576
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.020s (50.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 720x480
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.020s (50.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 640x480
		Interval: Discrete 0.017s (60.000 fps)
		Interval: Discrete 0.020s (50.000 fps)
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
Index       : 1
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUYV 4:2:2
	Size: Discrete 1920x1080
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 1600x1200
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 1360x768
		Interval: Discrete 0.125s (8.000 fps)
	Size: Discrete 1280x1024
		Interval: Discrete 0.125s (8.000 fps)
	Size: Discrete 1280x960
		Interval: Discrete 0.125s (8.000 fps)
	Size: Discrete 1280x720
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 1024x768
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 800x600
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 720x576
		Interval: Discrete 0.040s (25.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 720x480
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 640x480
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
		Interval: Discrete 0.200s (5.000 fps)

Camera 1 :

VIDIOC_ENUM_FMT
Index       : 1
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUYV 4:2:2
	Size: Discrete 1920x1080
		Interval: Discrete 0.033s (30.000 fps)

If I open them via GStreamer :
For camera 0:

$ gst-launch-1.0 tee name=stream v4l2src device=/dev/video0 ! image/jpeg,width=1280,height=720,framerate=60/1 ! jpegparse ! jpegdec ! xvimagesink sync=false

For camera 1:

$ gst-launch-1.0 v4l2src device=/dev/video1 ! xvimagesink

Both work perfectly fine; there’s no lag.
I somehow managed to use those GStreamer pipelines inside OpenCV C++ code, with help from here (OpenCV camera lag), but the quality is still not good.
I even tried the GStreamer C API directly (Basic tutorials) with no improvement.
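For reference, the OpenCV + GStreamer capture I ended up with looked roughly like this (a minimal sketch, not my exact code; it assumes OpenCV was built with GStreamer support, and the caps mirror the camera-0 pipeline above):

#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    // MJPEG pipeline for camera 0, decoded on the CPU and handed to OpenCV as BGR.
    // appsink drop/max-buffers discard stale frames to keep latency low.
    std::string pipeline =
        "v4l2src device=/dev/video0 ! "
        "image/jpeg,width=1280,height=720,framerate=60/1 ! "
        "jpegdec ! videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::imshow("camera 0", frame);
        if (cv::waitKey(1) == 27)   // Esc to quit
            break;
    }
    return 0;
}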

While searching for a solution, I came across the NVIDIA Multimedia API, whose examples work amazingly well (finally something that uses the GPU) and are written in C++. The only problem is that the API looks terrible, there’s no good documentation (Jetson Linux API Reference: Main Page | NVIDIA Docs), and even the examples are too overcomplicated to learn from. I also need to do some processing on the video; my best bet is OpenCV, and the README for the samples says something like this:

The included examples demonstrate how to do image processing with
CUDA, object detection and classification with cuDNN, TensorRT, and OpenCV usage

But no sample really uses OpenCV; there’s maybe one example with cv::Rect, which is not really helpful. Are there any tutorials about this API, or some good examples that, for instance, use the Multimedia API to read a camera, OpenCV to transform the image (change colors, stabilize, or similar), and then display it with imshow?

Maybe someone has an idea for another solution to my problem? I’ve heard that some people solved similar problems using ROS, but I don’t quite understand how; does anyone know anything about that?

Hi,
We have samples in the source code package:
L4T Driver Package (BSP) Sources
Please download it and give it a try. The samples show how to get a BGR buffer from a gstreamer pipeline.

There are also samples of using CUDA filters in OpenCV. Please check
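For context, a CUDA filter in OpenCV is used roughly like this (a sketch only; it assumes OpenCV was built with the CUDA modules such as cudafilters and cudaimgproc):

#include <opencv2/opencv.hpp>
#include <opencv2/cudafilters.hpp>
#include <opencv2/cudaimgproc.hpp>

void blur_on_gpu(const cv::Mat& bgr, cv::Mat& out)
{
    cv::cuda::GpuMat d_src, d_gray, d_blur;
    d_src.upload(bgr);                                   // copy the frame to the GPU

    cv::cuda::cvtColor(d_src, d_gray, cv::COLOR_BGR2GRAY);

    // Build the filter once and reuse it in real code; shown inline for brevity.
    cv::Ptr<cv::cuda::Filter> gauss =
        cv::cuda::createGaussianFilter(d_gray.type(), d_gray.type(),
                                       cv::Size(7, 7), 1.5);
    gauss->apply(d_gray, d_blur);

    d_blur.download(out);                                // copy the result back to the CPU
}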

Hello @DaneLLL, thanks for your help.
Which of these examples should I check for my problem?
Both GitHub links in those two posts are not working, so I will try this instead: JEP/install_opencv4.5.0_Jetson.sh at master · AastaNV/JEP · GitHub

Let me change the question a little:
I like how the example 12_camera_v4l2_cuda works, and I would love to:

  • add another camera (so the program captures both cameras in the same way, at the same time),
  • save the buffers from both cameras into cv::Mat and use OpenCV to convert the frames to grayscale (see the sketch after this list),
  • and show them somehow (ideally through cv::imshow, but right now I’m open to anything that works), all of that in real time.
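To be concrete, the OpenCV part I have in mind is only something like this (a sketch, assuming the two frames are already available as BGR cv::Mat):

#include <opencv2/opencv.hpp>

// Convert two already-captured frames to grayscale and display them.
void show_gray(const cv::Mat& frame0, const cv::Mat& frame1)
{
    cv::Mat gray0, gray1;
    cv::cvtColor(frame0, gray0, cv::COLOR_BGR2GRAY);
    cv::cvtColor(frame1, gray1, cv::COLOR_BGR2GRAY);

    cv::imshow("camera 0", gray0);
    cv::imshow("camera 1", gray1);
    cv::waitKey(1);   // keep the HighGUI windows responsive
}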

I tried to apply the methods mentioned here:
83012 | 83012 | 111013 | and here

With no success…
Pretty please, a step-by-step guide or code example would be great if possible; I’m not that experienced, unfortunately…

Hi,
You can check this patch:

The flow is:
1. Get the NvBuffer in RGBA, pitch-linear
2. Map it to the CPU
3. Wrap the CPU pointer in a cv::Mat
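In code, that flow would look roughly like this (a sketch, not the exact patch; the step that fills the buffer is only indicated as a placeholder, and params.pitch[0] from NvBufferGetParams is used as the cv::Mat row step):

#include <opencv2/opencv.hpp>
#include "nvbuf_utils.h"

// Sketch of the three steps: RGBA NvBuffer -> CPU mapping -> cv::Mat.
cv::Mat nvbuffer_to_bgr(int cam_w, int cam_h)
{
    /* 1. Create the buffer in RGBA (ABGR32), pitch-linear */
    NvBufferCreateParams input_params = {0};
    input_params.payloadType = NvBufferPayload_SurfArray;
    input_params.width       = cam_w;
    input_params.height      = cam_h;
    input_params.layout      = NvBufferLayout_Pitch;
    input_params.colorFormat = NvBufferColorFormat_ABGR32;
    input_params.nvbuf_tag   = NvBufferTag_NONE;

    int dmabuf_fd = -1;
    NvBufferCreateEx(&dmabuf_fd, &input_params);

    /* ... fill dmabuf_fd with a captured frame, e.g. via NvBufferTransform ... */

    /* 2. Map plane 0 to the CPU and sync it */
    void *pdata = NULL;
    NvBufferParams params;
    NvBufferGetParams(dmabuf_fd, &params);
    NvBufferMemMap(dmabuf_fd, 0, NvBufferMem_Read, &pdata);
    NvBufferMemSyncForCpu(dmabuf_fd, 0, &pdata);

    /* 3. Wrap the CPU pointer in a cv::Mat; the row step is the plane pitch */
    cv::Mat rgba(cam_h, cam_w, CV_8UC4, pdata, params.pitch[0]);
    cv::Mat bgr;
    cv::cvtColor(rgba, bgr, cv::COLOR_RGBA2BGR);  // deep copy into bgr

    NvBufferMemUnMap(dmabuf_fd, 0, &pdata);
    NvBufferDestroy(dmabuf_fd);
    return bgr;
}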

Yeah, that’s the one I tried.
But in example 12 it’s not exactly working.
Here’s what I got; I added my code around line ~686:

        cuda_postprocess(ctx, ctx->render_dmabuf_fd);

        /* Preview */
        // ctx->renderer->render(ctx->render_dmabuf_fd);
        void *pdata = NULL;
        NvBufferParams params;
        NvBufferGetParams(ctx->render_dmabuf_fd, &params);
        NvBufferMemMap(ctx->render_dmabuf_fd, 0, NvBufferMem_Read, &pdata);
        NvBufferMemSyncForCpu(ctx->render_dmabuf_fd, 0, &pdata);

        /* Wrap the mapped plane in a cv::Mat (type 0 == CV_8U) and convert */
        cv::Mat picYV12 = cv::Mat(ctx->cam_h, ctx->cam_w, 0, pdata);
        cv::Mat picBGR;
        cv::cvtColor(picYV12, picBGR, cv::COLOR_YUV2BGR_YV12); // also tried COLOR_RGBA2BGR

        NvBufferMemUnMap(ctx->render_dmabuf_fd, 0, &pdata);

        cv::imshow("img", picBGR);
        cv::waitKey(1);

        /* Enqueue camera buffer back to driver */
        if (ioctl(ctx->cam_fd, VIDIOC_QBUF, &v4l2_buf))
            ERROR_RETURN("Failed to queue camera buffers: %s (%d)",
                    strerror(errno), errno);
    }

Depending on how I call cvtColor, I get different results:

cv::cvtColor(picYV12, picBGR, cv::COLOR_YUV2BGR_YV12):
[screenshot: Screenshot from 2021-01-11 13-35-17]

cv::cvtColor(picYV12, picBGR, cv::COLOR_RGBA2BGR):
[second screenshot]
(There’s my hand in the picture.)

And I have no idea how to add another camera here.

Sample 13_multi_camera is not working for me; it doesn’t detect any cameras.

Hi,
The render_dmabuf_fd is in YUV420M:

    input_params.payloadType = NvBufferPayload_SurfArray;
    input_params.width = ctx->cam_w;
    input_params.height = ctx->cam_h;
    input_params.layout = NvBufferLayout_Pitch;
    input_params.colorFormat = get_nvbuff_color_fmt(V4L2_PIX_FMT_YUV420M);
    input_params.nvbuf_tag = NvBufferTag_NONE;

    /* Create Render buffer */
    if (-1 == NvBufferCreateEx(&ctx->render_dmabuf_fd, &input_params))

Please modify it to RGBA and do conversion through CV_RGBA2BGR.


OK, so I also needed to change:

set_defaults(context_t * ctx)
{
    memset(ctx, 0, sizeof(context_t));

    ctx->cam_devname = "/dev/video0";
    ctx->cam_fd = -1;
    ctx->cam_pixfmt = V4L2_PIX_FMT_ABGR32; //V4L2_PIX_FMT_YUYV; <- changed here
    ctx->cam_w = 640;
    ctx->cam_h = 480;
    ctx->frame = 0;
    ...


static nv_color_fmt nvcolor_fmt[] =
{
    /* TODO: add more pixel format mapping */
    {V4L2_PIX_FMT_UYVY, NvBufferColorFormat_UYVY},
    {V4L2_PIX_FMT_VYUY, NvBufferColorFormat_VYUY},
    {V4L2_PIX_FMT_YUYV, NvBufferColorFormat_YUYV},
    {V4L2_PIX_FMT_YVYU, NvBufferColorFormat_YVYU},
    {V4L2_PIX_FMT_GREY, NvBufferColorFormat_GRAY8},
    {V4L2_PIX_FMT_YUV420M, NvBufferColorFormat_YUV420},
    {V4L2_PIX_FMT_ABGR32, NvBufferColorFormat_ABGR32} // <- added that
};

And now it’s working, but… only for one of my cameras (camera 0 in the main post). If I try it with the second one, I get a Segmentation Fault (core dumped).

That camera only offers the “YUYV” pixel format, so I think that’s the problem.
How could I modify my code to make that one work?

Hi,
This modification looks wrong:

    ctx->cam_pixfmt = V4L2_PIX_FMT_ABGR32; //V4L2_PIX_FMT_YUYV; <- changed here

You don’t need to modify it, since it can be set on the command line:

        -f              Set output pixel format of video device (supports only YUYV/YVYU/UYVY/VYUY/GREY/MJPEG)

Yeah, that’s true… but that doesn’t solve my issue.
I still can’t receive frames from my second camera, even if I specify -f YUYV on the command line. It seems the conversions aren’t right for that format; are you sure CV_8UC4 and COLOR_RGBA2BGR are correct for YUYV 4:2:2 as well?

If I uncomment the renderer, the video it shows is fine for a moment, until OpenCV throws a Segmentation fault.

Hi,
You need to create the render buffer in RGBA:

    input_params.payloadType = NvBufferPayload_SurfArray;
    input_params.width = ctx->cam_w;
    input_params.height = ctx->cam_h;
    input_params.layout = NvBufferLayout_Pitch;
-    input_params.colorFormat = get_nvbuff_color_fmt(V4L2_PIX_FMT_YUV420M);
+    input_params.colorFormat = NvBufferColorFormat_ABGR32;
    input_params.nvbuf_tag = NvBufferTag_NONE;

    /* Create Render buffer */
    if (-1 == NvBufferCreateEx(&ctx->render_dmabuf_fd, &input_params))

And map the buffer to cv::Mat(CV_8UC4):

+            cv::Mat imgbuf = cv::Mat(_HEIGHT_,
+                                     _WIDTH_,
+                                     CV_8UC4, pdata);

This should give you a cv::Mat in RGBA.

OK, now both cameras are working, but there’s an FPS loss on the second one.
Here’s my cpp file:
camera_v4l2_cuda_OpenCV.cpp (26.3 KB)
The FPS counter shows ~25 fps, while in reality it’s more like ~10 or less :(
so loop speed doesn’t seem to be the cause?

After adding cv::resize() to lower the resolution (to 1280x720) it works perfectly; what’s the reason?
Can I somehow improve it so it works just as well at 1920x1080?
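For reference, the resize I added is basically just this (a sketch; picBGR refers to the frame from my earlier snippet):

            // Downscale before display so cvtColor/imshow work on 1280x720
            // instead of the full 1920x1080 frame.
            cv::Mat small;
            cv::resize(picBGR, small, cv::Size(1280, 720));
            cv::imshow("img", small);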

But that’s pretty OK.
I noticed it’s not using the GPU as much as before; I suspect that’s because cv::imshow is CPU-oriented and the majority of the earlier GPU usage came from the previous renderer?

Now, about the next step: is there an easy way to add a second camera to that example?

Hi,
Please execute sudo nvpmodel -m 0 and sudo jetson_clocks to get maximum performance. OpenCV requires significant CPU usage, so it is better to run the CPUs at the maximum clock. You can check the system status by executing sudo tegrastats.

The sample demonstrates frame capture through v4l2. To open a second camera, you may duplicate the code that opens the device node.
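A minimal sketch of what duplicating the device-open code could look like, using plain V4L2 calls rather than the sample’s helper functions (the device paths and error handling here are just illustrative):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>

// Open one capture device and verify it supports video capture.
static int open_camera(const char *devname)
{
    int fd = open(devname, O_RDWR | O_NONBLOCK);
    if (fd < 0)
    {
        perror(devname);
        return -1;
    }

    struct v4l2_capability caps = {};
    if (ioctl(fd, VIDIOC_QUERYCAP, &caps) < 0 ||
        !(caps.capabilities & V4L2_CAP_VIDEO_CAPTURE))
    {
        fprintf(stderr, "%s is not a video capture device\n", devname);
        close(fd);
        return -1;
    }
    return fd;
}

int main()
{
    // Each camera gets its own fd, buffers, and dmabuf; the capture loop then
    // polls both fds (or runs one thread per camera).
    int cam0_fd = open_camera("/dev/video0");
    int cam1_fd = open_camera("/dev/video1");

    /* ... set formats, request/queue buffers, and stream on each fd ... */

    if (cam0_fd >= 0) close(cam0_fd);
    if (cam1_fd >= 0) close(cam1_fd);
    return 0;
}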