v4l2src NVMM

Hello,

I work with a monochrome sensor, so I cannot use nvcamerasrc.

How can I make v4l2src write directly into NVMM memory?

Best regards

Hi,
For GStreamer, please refer to:

$ gst-launch-1.0 v4l2src device=/dev/video1 ! 'video/x-raw,format=UYVY' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink

We also suggest you try the tegra_multimedia_api samples:

tegra_multimedia_api/samples/12_camera_v4l2_cuda
tegra_multimedia_api/samples/v4l2cuda

My sensor actually produces RAW12 monochrome data, and I have patched the vi2_video_formats table in the kernel sources so that it can output either T_L8 / V4L2_PIX_FMT_GREY (GStreamer “GRAY8”) or T_R16_I / V4L2_PIX_FMT_Y16_BE (GStreamer “GRAY16_BE”). So v4l2src really produces either “GRAY8” or “GRAY16_BE”.

None of the following works:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=GRAY8' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=GRAY8' ! nvjpegenc ! fakesink
WARNING: erroneous pipeline: could not link nvvconv0 to nvjpegenc0
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=GRAY16_BE' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=GRAY8' ! nvjpegenc ! fakesink
WARNING: erroneous pipeline: could not link v4l2src0 to nvvconv0

The following works, though slowly:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=GRAY8' ! nvjpegenc ! fakesink

tegra_multimedia_api/samples/12_camera_v4l2_cuda does not help, as there is no NvBufferColorFormat_GREY or NvBufferColorFormat_Y16_BE.

Although tegra_multimedia_api/samples/v4l2cuda does some unhelpful things, trying to convert my grey image with gpuConvertYUYVtoRGB and ignoring the bytesperline info of the buffers, it does manage to read incoming frames directly into the memory allocated by cudaMallocManaged. Thank you.
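
For reference, here is a minimal sketch of the capture path this implies: a V4L2 USERPTR buffer backed by cudaMallocManaged memory, so the driver fills CUDA-visible memory directly. The REQBUFS/STREAMON setup is omitted, and the names and error handling are illustrative, not taken from the sample.

/* Sketch (assumptions noted above): queue a cudaMallocManaged() buffer to a
 * V4L2 capture device with V4L2_MEMORY_USERPTR, so captured frames land
 * directly in CUDA-visible memory. */
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <string.h>
#include <cuda_runtime.h>

static int queue_managed_buffer(int v4l2_fd, size_t frame_size)
{
    unsigned char *frame = NULL;
    struct v4l2_buffer buf;

    if (cudaMallocManaged((void **)&frame, frame_size,
                          cudaMemAttachGlobal) != cudaSuccess)
        return -1;

    memset(&buf, 0, sizeof(buf));
    buf.type      = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory    = V4L2_MEMORY_USERPTR;
    buf.index     = 0;
    buf.m.userptr = (unsigned long)frame;
    buf.length    = frame_size;

    /* After VIDIOC_DQBUF returns this buffer, its contents are usable by
     * CUDA kernels without an extra copy. */
    return ioctl(v4l2_fd, VIDIOC_QBUF, &buf);
}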

If I integrate cudaMallocManaged buffer allocation into v4l2src, how can I advertise the memory allocated by cudaMallocManaged as (memory:NVMM)? Which function must I use to map a memory address allocated by cudaMallocManaged into the address of an NVMM buffer that I could give to nvjpegenc, nvvidconv, nvivafilter and omxh264enc?

The input of nvjpegenc has to be I420. You should convert grey to I420 via nvvidconv.
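
For example, the suggested conversion would presumably look something like the pipeline below (untested; the caps strings are an assumption):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=GRAY8' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! nvjpegenc ! fakesink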

I will actually allocate memory using cudaMallocManaged, but I will make the buffers 3/2 as large as my GRAY8 images so there is room for the half-size U and V planes, and at initialisation I will fill that part of each buffer with 0x80 (decimal 128), which is the neutral value for the U and V components, as sketched below.
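
A minimal sketch of that buffer layout, assuming an 8-bit image with even width and height; the function name and error handling are illustrative only:

/* Sketch: allocate a buffer laid out as I420, capture the GRAY8 image into
 * the Y plane, and fill the U and V planes once with the neutral value 0x80
 * so the frame encodes as grey. */
#include <cuda_runtime.h>
#include <string.h>

static unsigned char *alloc_gray8_as_i420(int width, int height)
{
    unsigned char *buf = NULL;
    size_t y_size  = (size_t)width * height;  /* luma plane = the GRAY8 image */
    size_t uv_size = y_size / 2;              /* U + V, each quarter-sized    */

    if (cudaMallocManaged((void **)&buf, y_size + uv_size,
                          cudaMemAttachGlobal) != cudaSuccess)
        return NULL;

    memset(buf + y_size, 0x80, uv_size);      /* neutral chroma, written once */
    return buf;                               /* GRAY8 data goes in buf[0..y_size) */
}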

Which function must I use to map the memory address allocated by cudaMallocManaged into the address of an NVMM buffer that I could give to nvjpegenc, nvvidconv, nvivafilter and omxh264enc?

Your case is close to the post below:
[url]https://devtalk.nvidia.com/default/topic/1027631/jetson-tx2/formatting-images-to-feed-into-nvvideoencoder-tegra-multimedia-api-/post/5227402/#5227402[/url]
For leveraging the HW components, you have to allocate an NvBuffer instead of using cudaMallocManaged.
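
A hedged sketch of what that allocation might look like with nvbuf_utils; the pitch-linear I420 choice, the helper name and the error handling are assumptions to be checked against nvbuf_utils.h on the target L4T release:

/* Sketch: allocate a hardware dmabuf via nvbuf_utils instead of
 * cudaMallocManaged(), so HW components (nvvidconv, nvjpegenc, ...) can
 * consume it. */
#include "nvbuf_utils.h"

static int create_hw_buffer(int width, int height)
{
    int dmabuf_fd = -1;

    if (NvBufferCreate(&dmabuf_fd, width, height,
                       NvBufferLayout_Pitch,
                       NvBufferColorFormat_YUV420) != 0)
        return -1;

    /* This fd is the NvBuffer/NVMM handle used by the multimedia API. */
    return dmabuf_fd;
}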

Is an NvBuffer some kind of superset of the memory that could be allocated by cudaMallocManaged?
In other words, can I give some part (which one?) of an NvBuffer to CUDA and let CUDA use it with the same performance as memory allocated by cudaMallocManaged?

Hi phdm,
Yes, the performance is the same. Below is demonstration code from 12_camera_v4l2_cuda:

static bool
cuda_postprocess(context_t *ctx, int fd)
{
    if (ctx->enable_cuda)
    {
        // Create EGLImage from dmabuf fd
        ctx->egl_image = NvEGLImageFromFd(ctx->egl_display, fd);
        if (ctx->egl_image == NULL)
            ERROR_RETURN("Failed to map dmabuf fd (0x%X) to EGLImage",
                    ctx->render_dmabuf_fd);

        // Running algo process with EGLImage via GPU multi cores
        HandleEGLImage(&ctx->egl_image);

        // Destroy EGLImage
        NvDestroyEGLImage(ctx->egl_display, ctx->egl_image);
        ctx->egl_image = NULL;
    }

    return true;
}
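
For completeness, here is a rough sketch of the CUDA/EGL interop that a helper like HandleEGLImage() relies on, using the public CUDA driver API; whether the sample's helper does exactly this is an assumption:

/* Sketch: register the EGLImage (backed by the NvBuffer dmabuf) with the
 * CUDA driver API and obtain a device pointer that CUDA kernels can use
 * like ordinary device memory. */
#include <cuda.h>
#include <cudaEGL.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

static void *map_eglimage_to_cuda(EGLImageKHR egl_image, CUgraphicsResource *resource)
{
    CUeglFrame frame;

    if (cuGraphicsEGLRegisterImage(resource, egl_image,
                                   CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE) != CUDA_SUCCESS)
        return NULL;

    if (cuGraphicsResourceGetMappedEglFrame(&frame, *resource, 0, 0) != CUDA_SUCCESS)
        return NULL;

    /* For a pitch-linear buffer, plane 0 (the Y plane for GREY/I420 data) is
     * available here as a device pointer.  Unregister the resource with
     * cuGraphicsUnregisterResource() when done. */
    return frame.frame.pPitch[0];
}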

Hi DaneLLL,

Sorry for my slow reply, but 12_camera_v4l2_cuda does in fact fail on my headless Jetson TX1 based carrier board:

nvidia@cam5-0003:~/tegra_multimedia_api/samples/12_camera_v4l2_cuda$ ./camera_v4l2_cuda -f YUYV -s 1936x1105 -c
[ERROR] (NvEglRenderer.cpp:97) <renderer0> Error in opening display
[ERROR] (NvEglRenderer.cpp:152) <renderer0> Got ERROR closing display
ERROR: display_initialize(): (line:261) Failed to create EGL renderer
ERROR: init_components(): (line:286) Failed to initialize display
ERROR: main(): (line:530) Failed to initialize v4l2 components
nvbuf_utils: dmabuf_fd 0 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
App run failed
nvidia@cam5-0003:~/tegra_multimedia_api/samples/12_camera_v4l2_cuda$

Could that be made to work without a display?

Second question: is an NvBuffer the same thing as a ‘(memory:NVMM)’ buffer?

Please remove NvEglRenderer from the sample. We provide samples for demonstrating the HW functions, and you have to integrate them into your use case.