Gstnvv4l2camerasrc with GRAY8 support

As I understand it, the NVIDIA camera plugin "nvv4l2camerasrc" currently only supports the UYVY format. However, the camera I am using outputs GRAY8. Is it possible for me to modify the source code of gstnvv4l2camerasrc to support this format? Or should I not even begin to try because it is not possible?

Hi,
It should be possible. We support GRAY8 in 12_camera_v4l2_cuda. Please check if you can run the sample to capture frames first, and then port the change to the nvv4l2camerasrc plugin. The sample is in

/usr/src/jetson_multimedia_api/samples/12_camera_v4l2_cuda

Thank you for the info.

I am currently able to capture frames using the ‘v4l2src’ GStreamer plugin. However, I am experiencing some performance issues resulting in frame loss. I have determined that the frame loss occurs at the beginning of my pipeline, so basically at the source. I was hoping that by using the ‘nvv4l2camerasrc’ plugin I could make use of the NVMM buffers for some performance gain, and also increase the queue buffer size used by the v4l2 device in the plugin. Do you think this is useful?

Hi,
You may check if you can achieve target fps in running

gst-launch-1.0 v4l2src ! video/x-raw,format=GRAY8,width=_W_,height=_H_,framerate=_FR_ ! nvvidconv ! "video/x-raw(memory:NVMM),format=I420" ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v

If you can get enough performance from this pipeline, you may not need to customize nvv4l2camerasrc.

The customization eliminates one memory copy from the CPU buffer to the NVMM buffer. If you would like to reduce CPU loading, we suggest giving it a try.

Thanks, but it seems nvvidconv does not support GRAY8.

When executing

gst-launch-1.0 v4l2src ! "video/x-raw,format=(string)GRAY8,width=(int)4208,height=(int)3120,framerate=(fraction)26/1" ! nvvidconv ! "video/x-raw(memory:NVMM),format=(string)NV12" ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v

I get the following error

Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0: sync = false
Setting pipeline to PLAYING …
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-raw, format=(string)GRAY8, width=(int)4208, height=(int)3120, framerate=(fraction)26/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, format=(string)GRAY8, width=(int)4208, height=(int)3120, framerate=(fraction)26/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)4208, height=(int)3120, framerate=(fraction)26/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)4208, height=(int)3120, framerate=(fraction)26/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = video/x-raw(memory:NVMM), width=(int)4208, height=(int)3120, framerate=(fraction)26/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4208, height=(int)3120, framerate=(fraction)26/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0.GstGhostPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4208, height=(int)3120, framerate=(fraction)26/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4208, height=(int)3120, framerate=(fraction)26/1, interlace-mode=(string)progressive, format=(string)NV12
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:sink: caps = video/x-raw, format=(string)GRAY8, width=(int)4208, height=(int)3120, framerate=(fraction)26/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, format=(string)GRAY8, width=(int)4208, height=(int)3120, framerate=(fraction)26/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
gst_nvvconv_transform: NvBufferTransform not supported
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason error (-5)
Execution ended after 0:00:00.141547135
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

Hi,
I have corrected the pipeline. Please replace NV12 with I420 and try again.
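For reference, that is your pipeline with the one substitution:

gst-launch-1.0 v4l2src ! "video/x-raw,format=(string)GRAY8,width=(int)4208,height=(int)3120,framerate=(fraction)26/1" ! nvvidconv ! "video/x-raw(memory:NVMM),format=(string)I420" ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v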

Your source is 4208x3120 at 26fps. Since the resolution is above 4K, we would suggest customizing the nvv4l2camerasrc plugin.

With I420 I can get it to work, but not at the expected 26fps; I only get 21fps. I assume this is because of the format conversion from GRAY8 to I420?
I will have a look at customizing nvv4l2camerasrc.

Hi,

I have modified the plugin to support the GRAY8 format. However, I am running into some strange performance issues. When I run

gst-launch-1.0 v4l2src ! "video/x-raw, format=(string)GRAY8, width=(int)4208, height=(int)3120, framerate=(fraction)26/1" ! fpsdisplaysink video-sink=fakesink text-overlay=false -e -v

I get 26fps as expected.

When I run

gst-launch-1.0 nvv4l2camerasrc bufapi-version=true ! "video/x-raw(memory:NVMM), format=(string)GRAY8, width=(int)4208, height=(int)3120, framerate=(fraction)26/1" ! fpsdisplaysink video-sink=fakesink text-overlay=false -e -v

I only get 2fps. Any idea what the issue might be?

Hi,
Are you able to try 12_camera_v4l2_cuda? See if you can achieve the target frame rate when running the sample.

I have tried that sample, but I get the following image and only a frame rate of 10 fps.

I also have an additional question. According to the documentation, the nvvidconv plugin should support conversion from GRAY8 (NVMM) to I420 (NVMM). However, when I use a pipeline which does this conversion I always get

gst_nvvconv_transform: NvBufferTransform not supported

Can you please confirm whether this should work or not?

Hi,
GRAY8 to I420 conversion should work. It is done in 12_camera_v4l2_cuda by default:

    if (-1 == NvBufferTransform(ctx->g_buff[v4l2_buf.index].dmabuff_fd,
                                ctx->render_dmabuf_fd, &transParams))
        ERROR_RETURN("Failed to convert the buffer");

    if (ctx->cam_pixfmt == V4L2_PIX_FMT_GREY) {
        if (!nvbuff_do_clearchroma(ctx->render_dmabuf_fd))
            ERROR_RETURN("Failed to clear chroma");
    }

Also, please call NvBufferGetParams() to get the pitch, width, and height. It looks like for resolution 4208x3120 the hardware buffer does not have identical pitch and width, so the frame is not captured into the buffer correctly.
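A minimal sketch of such a check with nvbuf_utils (the helper name is illustrative, not from the sample):

    #include <stdio.h>
    #include "nvbuf_utils.h"

    /* Print the plane-0 geometry of a hardware (DMA) buffer,
     * e.g. ctx->g_buff[i].dmabuff_fd in the sample. */
    static void print_buf_geometry(int dmabuf_fd)
    {
        NvBufferParams params;

        if (NvBufferGetParams(dmabuf_fd, &params) != 0) {
            fprintf(stderr, "NvBufferGetParams failed\n");
            return;
        }
        printf("width=%u height=%u pitch=%u\n",
               params.width[0], params.height[0], params.pitch[0]);
    }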

Does this mean DMA cannot be used with a v4l2 camera at this resolution?

In the code I also see

    if (ctx->cam_pixfmt == V4L2_PIX_FMT_GREY &&
            params.pitch[0] != params.width[0])
        ctx->capture_dmabuf = false;

For the resolution I use, I get

  • width = 4208
  • height = 3120
  • pitch = 4352

So in my case it does not use DMA. Why is this the case?

Also, I can get an image which looks OK by manually modifying the width to 4224 in the Raw2NvBuffer call, so apparently the width has to be a multiple of 64? Why is this the case?
Anyway, this is without using DMA, which is not useful to me as it results in too low a frame rate. Why exactly is that?

Hi,
It is a constraint of the hardware DMA buffer. For 4208x3120, the buffers are allocated as 4352x3120 and the valid data is in 4208x3120. Does the source support something like 3840x2160? If yes, we would suggest trying this resolution; it should allocate the DMA buffer with pitch equal to width. If the source only supports 4208x3120, you would need to capture into CPU buffers and call Raw2NvBuffer.
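If it helps, here is a sketch of that CPU-buffer path, assuming the GRAY8 frame has already been captured into an ordinary CPU buffer via V4L2 (the function is illustrative, not from the sample):

    #include <stdio.h>
    #include "nvbuf_utils.h"

    /* Copy a GRAY8 frame from a CPU buffer into plane 0 of the
     * destination hardware buffer. As noted above, the width passed
     * here may need padding for a 4208-wide frame. */
    static int copy_frame_to_nvbuffer(unsigned char *cpu_frame,
                                      unsigned int width,
                                      unsigned int height,
                                      int dst_dmabuf_fd)
    {
        if (Raw2NvBuffer(cpu_frame, 0, width, height, dst_dmabuf_fd) != 0) {
            fprintf(stderr, "Raw2NvBuffer failed\n");
            return -1;
        }
        return 0;
    }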

I can modify the sensor driver to produce any resolution lower than 4208x3120. However, for our purposes we need the resolution to be as high as possible. Is 3840x2160 the highest resolution with the correct pitch? Could you clarify how you calculate which resolutions give the correct pitch?

Hi,
The pitch value is aligned up to a multiple of 256, so 320 → 512, 640 → 768, and 4208 → 4352.
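In code, the alignment is just a round-up to the next multiple of 256, for example:

    /* Round a line width in bytes up to the next multiple of 256,
     * matching the pitch rule above: 320 -> 512, 640 -> 768, 4208 -> 4352. */
    static unsigned int align_256(unsigned int bytes)
    {
        return (bytes + 255u) & ~255u;
    }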

Thank you for your answer.

I was able to get it working even for 4208x3120 by setting bytesperline in the VIDIOC_S_FMT call for the v4l2 device to the 256-aligned value of 4352.
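For anyone finding this later, this is roughly what the change looks like for the 4208x3120 GRAY8 case (a sketch of my setup code, not the exact application source; the driver may still adjust the value it actually applies):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Request a 256-aligned bytesperline when setting the capture format. */
    static int set_gray8_format(int fd)
    {
        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));

        fmt.type                 = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width        = 4208;
        fmt.fmt.pix.height       = 3120;
        fmt.fmt.pix.pixelformat  = V4L2_PIX_FMT_GREY;
        fmt.fmt.pix.field        = V4L2_FIELD_NONE;
        fmt.fmt.pix.bytesperline = 4352;   /* 4208 aligned up to 256 */

        return ioctl(fd, VIDIOC_S_FMT, &fmt);
    }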

Hi rbi,
I have tried with a 4192 x 3120 UYVY image. I am trying to set the bytesperline of the v4l2 device in VIDIOC_S_FMT to 8448 (4192 * 2 + 64, to make it aligned to 256). But when I read the configured parameter back using VIDIOC_G_FMT it does not show the configured value; it returns bytesperline as 8384 (4192 * 2). Is there anything I am missing? Any suggestions?
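For reference, this is roughly how I read the value back (as I understand it, the driver is allowed to adjust bytesperline during VIDIOC_S_FMT, which may be what is happening here; the helper is just a sketch):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Read back the bytesperline the driver actually applied. */
    static unsigned int get_bytesperline(int fd)
    {
        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
            return 0;
        return fmt.fmt.pix.bytesperline;
    }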