V4L2 DMA-memory streaming problem on tegra-video device

platform: Orin AGX
JP: 6.2
L4T: 36.4

We have a driver for our camera.
On JP 5.0.1 (L4T 35.1) video streaming works fine,
both via Argus (nvarguscamerasrc) and via V4L2 (nvv4l2camerasrc).

On L4T 36.4.x, Argus streaming is OK
[gst-launch-1.0 nvarguscamerasrc ! fakesink],
and so is V4L2 when using mmap memory
[v4l2-ctl --stream-mmap].
But V4L2 with DMA memory (which is what nvv4l2camerasrc uses) is not working,
i.e. gst-launch-1.0 nvv4l2camerasrc ! fakesink:
VIDIOC_QBUF fails (error message “invalid dmabuf length” from videobuf2_common).
We tried patching the code to force a larger size in NvBufSurfaceAllocateParams,
so that the VIDIOC_QBUF stage passes OK,
but this ends with a crash at VIDIOC_DQBUF …
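For context, the “invalid dmabuf length” message is videobuf2 rejecting an imported dmabuf that is smaller than the plane size the driver expects. Below is a simplified model of that kind of check, written for illustration only (it is not the actual kernel code):

```c
#include <stddef.h>

/* Simplified model of the videobuf2 DMABUF length check:
 * the imported dmabuf must be at least as large as the minimum
 * plane length the driver established at buffer-setup time.
 * If userspace allocates the surface from width*height*bpp while
 * the driver expects a padded/aligned size, VIDIOC_QBUF fails
 * with "invalid dmabuf length". Illustrative sketch only. */
int dmabuf_length_ok(size_t dmabuf_size, size_t driver_min_length)
{
    return dmabuf_size >= driver_min_length;
}
```

This matches the symptom: the surface allocated by the plugin is smaller than what the driver reports as the minimum plane length, so the buffer never gets queued.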

v4l2-compliance-out.txt (7.7 KB)

hello aviad.pedahel,

we would like to confirm your sensor output formats.
please check with… $ v4l2-ctl -d /dev/video0 --list-formats-ext

attached -
list-formats-ext.txt (776 Bytes)
The GStreamer pipe works OK and transitions to the PLAYING state.
Moreover, when nvv4l2camerasrc is patched to use V4L2 mmap memory, video streaming works,
but this forces a memcpy (via Raw2NvBufSurface()),
which decreases the fps.
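For reference, the fps cost of the mmap path comes from the per-frame copy: the mmap’d V4L2 buffer is tightly packed, while the destination surface is typically pitch-linear with a hardware-aligned pitch. A hypothetical sketch of that kind of copy (not the actual Raw2NvBufSurface() implementation):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical line-by-line copy from a tightly packed capture
 * buffer (bytes per line == width * bytes-per-pixel) into a
 * pitch-linear surface whose pitch is >= the packed line size.
 * This per-frame copy is what the DMABUF path avoids, and why
 * the mmap workaround lowers the achievable fps. */
void copy_packed_to_pitched(uint8_t *dst, size_t dst_pitch,
                            const uint8_t *src, size_t src_line,
                            size_t height)
{
    for (size_t y = 0; y < height; y++)
        memcpy(dst + y * dst_pitch, src + y * src_line, src_line);
}
```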

hello aviad.pedahel,

it’s BA10, which is GRBG10. this is a bayer camera sensor.
according to the Camera Architecture Stack, you should use libargus, or the nvarguscamerasrc plugin, for using the Jetson ISP.

As I wrote above, we are using nvarguscamerasrc and it works OK.
But we also need the Bayer samples, and for this we use nvv4l2camerasrc.
On L4T 35.1 this worked fine …

hello aviad.pedahel,

may I have more details about your real use-case to obtain the raw directly?

We use a sensor that has variants that can output 10-bit grayscale,
so the ISP is not really needed (debayer, color correction, …).
This format is not supported by the Tegra infrastructure, so we must obtain the raw samples directly.

hello aviad.pedahel,

may I also know what’s the patch you’ve applied?

v4l2mmap_diff.txt (4.7 KB)

however… gstnvv4l2greysrc.c is not even among the sources maintained in the NV repository.

… gstnvv4l2greysrc.c is just a rename of gstnvcamerasrc.cpp.

For clarity,
I will re-explain our problem
and detail the steps that were done,
starting from a clean sheet.
Our platform is Orin AGX, L4T version 36.4.3.

The nvidia-l4t-gstreamer package is installed
[nvidia-l4t-gstreamer/stable,now 36.4.3-20250107174145],

so we have the official nvarguscamerasrc and nvv4l2camerasrc.

  1. While nvarguscamerasrc is used, there is no problem.
  2. While nvv4l2camerasrc is used, the pipe freezes - no streaming …
  3. We cloned the code for the nvv4l2camerasrc plugin from the NVIDIA repo and took the latest tag:
    $ git remote -v
    origin https://nv-tegra.nvidia.com/r/tegra/gst-src/gst-nvv4l2camera.git (fetch)
    origin https://nv-tegra.nvidia.com/r/tegra/gst-src/gst-nvv4l2camera.git (push)
    $ git status
    HEAD detached at jetson_36.4.3
  4. We patched the code to add a V4L2 mmap-memory option.
    I attached the patch file, and the whole sources as a tar.

While mmap is used, we get streaming from the patched nvv4l2camerasrc plugin.
[The V4L2 memory type is set on line #60 of nvv4l2camerasrc.cpp]

** For testing streaming we use basic pipes:
A.
gst-launch-1.0 nvarguscamerasrc ! fpsdisplaysink text-overlay=false video-sink=fakesink -v
B.
gst-launch-1.0 nvv4l2camerasrc ! fpsdisplaysink text-overlay=false video-sink=fakesink -v

I hope this helps.
thanks
patched_nvv4l2camerasrc.zip (610.2 KB)

Hi,
The nvv4l2camerasrc plugin supports capturing UYVY frame data into an NVMM buffer directly. You should not need to patch it. You may try with the default plugin and see if you still observe the issue. You may also try the jetson_multimedia_api sample:

/usr/src/jetson_multimedia_api/samples/12_v4l2_camera_cuda/

This is exactly our problem.
On the previous version (5.02),
both nvv4l2camerasrc and the 12_v4l2_camera_cuda example
worked properly;
on the current release - 36.4.x -
they do not work.
[Both use the same method - V4L2 DMA memory …]

hello aviad.pedahel,

if this is a regression, did you also test with previous JP-6 release versions for verification?

36.4.0 / 36.4.3
Same behavior

hello aviad.pedahel,

I’ve checked v4l2_camera_cuda, which is able to run on a developer kit.
may I know what the error reports are? could you please also share the logs, such as $ dmesg --follow for the instant kernel messages.

I have run v4l2_camera_cuda and, as expected,
see the same behavior described in the first message.

If a large enough size is not forced,
the software fails immediately at the VIDIOC_QBUF stage.
[“ERROR: request_camera_buff(): (line:365) Failed to enqueue buffers: Bad address (14)”]
[It seems the driver is requesting 524544 bytes beyond the pixel count]

When a sufficiently large size is forced, the kernel crashes at the VIDIOC_DQBUF stage.
I attached the output of dmesg --follow
dmesg.txt (1.2 KB)

Anyway,

Our solution was to leave the V4L2 path and work only via Argus.
Based on the nvarguscamerasrc sources,
we implemented an element that
pulls the camera's Bayer samples (by applying setPixelFormat(PIXEL_FMT_RAW16) to the output stream).

It works fine …