L4T 28.2.1: nvcamerasrc vs v4l2-ctl --stream-mmap --stream-to= vs v4l2src

Hi nvidia,

I have an IMX264 monochrome sensor which sends Y12 pixels.

I have added the following entries to vi2_video_formats:

TEGRA_VIDEO_FORMAT(RAW12, 12, Y12_1X12, 2, 1, T_R16_I,
                   RAW12, Y16_BE, "GRAY16_BE"),
TEGRA_VIDEO_FORMAT(RAW12, 12, Y12_1X12, 1, 1, T_L8,
                   RAW12, GREY, "GRAY8"),
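
To check that the kernel really exposes the two new formats, they can be listed from user space (assuming the sensor is /dev/video0):

v4l2-ctl -d /dev/video0 --list-formats-ext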

I can acquire images from this sensor using several different commands:

  • a gstreamer pipeline starting with nvcamerasrc, which of course runs debayering on non-Bayer input, but still produces an acceptable image with some detail lost compared to the Y12 source

  • v4l2-ctl --stream-mmap --stream-to=

  • a gstreamer pipeline starting with v4l2src

The problem is that the resulting framerates differ widely and are disappointing. With nvcamerasrc I obtain about 20 fps, with v4l2-ctl about 7 fps, but with v4l2src only about 2 fps. All tests are made on the same camera, without changing the imx264 driver. For the v4l2src method, I tried the ‘io-mode=0’, ‘io-mode=2’ and ‘io-mode=4’ properties, without any framerate difference.
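
For example, one of the io-mode tests had this shape (illustrative only; io-mode=2 selects mmap and io-mode=4 dmabuf, per ‘gst-inspect-1.0 v4l2src’):

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! 'video/x-raw,format=GRAY8,width=1936,height=1080' ! fakesink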

What can I do to speed up the v4l2src-based pipeline?

hello phdm,

you may also refer to Topic 1036708 for some useful information.
thanks

When I use v4l2src, I get about 2 fps, and the kernel says:

[  102.446162] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  105.178377] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  107.686051] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  110.678096] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  113.770608] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  116.802538] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  120.190636] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  123.590124] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  127.062676] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  130.586798] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  134.058371] video4linux video0: MW_ACK_DONE syncpoint time out!0

Those messages do not happen when using nvcamerasrc at 20 fps, with exactly the same kernel and driver.

When I use ‘v4l2-ctl -d /dev/video0 --set-fmt-video=width=1936,height=1080,pixelformat=RG12 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=20 --stream-to=test.raw’, I get 7 fps, and the same messages, but at a faster pace.

[  648.066709] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  648.198663] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  648.330852] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  648.466733] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  648.598706] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  648.730706] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  648.777255] vi 54080000.vi: tegra_channel_error_status:error 4000 frame 96
[  648.898695] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  648.930675] video4linux video0: MW_ACK_DONE syncpoint time out!0
[  649.016532] video4linux video0: MW_ACK_DONE syncpoint time out!0

hello phdm,

there’s a low-level failure on the kernel side; you must solve the syncpt timeout failure first.

video4linux video0: MW_ACK_DONE syncpoint time out!0

the Tegra low-level driver uses sync points to capture the MIPI stream; an MW_ACK_DONE syncpt timeout means it cannot get the frame-end signal.
could you please review the device tree property settings; they should be identical to the sensor register settings.
you could also refer to https://elinux.org/Jetson_TX2/28.1_Camera_BringUp for some debug tips.
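for example, the trace log described on that page can be captured roughly like this (event names are taken from that page; debugfs is assumed mounted at /sys/kernel/debug):

echo 1 > /sys/kernel/debug/tracing/tracing_on
echo 30720 > /sys/kernel/debug/tracing/buffer_size_kb
echo 1 > /sys/kernel/debug/tracing/events/tegra_rtcpu/enable
echo 1 > /sys/kernel/debug/tracing/events/freertos/enable
echo 2 > /sys/kernel/debug/tracing/events/camera_common/enable
echo > /sys/kernel/debug/tracing/trace
cat /sys/kernel/debug/tracing/trace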
thanks

Thanks JerryChang,

but I still do not understand why the message “video4linux video0: MW_ACK_DONE syncpoint time out!0” happens only when using v4l2src, and never when using nvcamerasrc.

I also noticed that “line_length” is completely unused by the kernel in l4t 28.2.1. ‘sensor_common_parse_image_props’ merely reads it, but no other function uses the value.
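
The only hit I find is the parser itself (searching in the L4T 28.2.1 kernel tree; the exact path is my assumption):

grep -rn line_length drivers/media/platform/tegra/camera/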

hello phdm,

  1. there are different paths for v4l2src and nvcamerasrc.
    please check the Camera Architecture Stack via [NVIDIA Tegra Linux Driver Package]-> [Development Guide]-> [Camera Development] for more details.

  2. you’re correct, the kernel driver does not use the line_length property.
    it is used for the sensor readout time calculation, which nvcamerasrc relies on.

  3. I think you should use v4l2src, since you don’t need nvcamerasrc to perform debayering for your monochrome sensor; see the sketch below.
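
a minimal v4l2src-only pipeline to check the achievable frame rate could look like this (a sketch assuming the GRAY8 format added above; with -v, fpsdisplaysink reports the measured rate):

gst-launch-1.0 -v v4l2src device=/dev/video0 ! 'video/x-raw,format=GRAY8,width=1936,height=1080' ! fpsdisplaysink video-sink=fakesink text-overlay=false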

Well, this is now fixed, but it took some trial and error. For the vendor-documented 1920x1080 mode, I had to set the height in the driver mode and the “active_h” field in the DTS mode to 1105 (instead of 1080), and for the 2464x2048 mode, I had to set those values to 2065 (instead of 2048).

That fixed the “MW_ACK_DONE syncpt timeout” error.
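
After that change, the format the driver actually negotiates can be double-checked from user space (device node assumed):

v4l2-ctl -d /dev/video0 --get-fmt-video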

For the speed, however, problems remain:

If I use

v4l2-ctl -d /dev/video$ID --set-fmt-video=height=2065,width=2464,pixelformat=GREY --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1000

I get 30 fps.

If I use

cd /run/user/1001
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,format=GRAY8,width=2464,height=2065 ! nvjpegenc ! multifilesink max-files=5

The framerate drops to 3 fps.

Is there a way to get a higher framerate with v4l2src to feed a CUDA plugin or nvjpegenc?

hello phdm,

let’s narrow down the issues.
could you please access the camera via v4l2src and save the video stream into an *.mp4 file to check the frame rate.
for example,

gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1' ! nvtee ! omxh264enc bitrate=20000000 ! qtmux ! filesink location=video.mp4

moreover,
have you tried boosting the system performance for your use case?
please refer to [Developer Guide]-> [Clock Frequency and Power Management]-> [Power Management for TX2/TX2i Devices]-> [Maximizing Jetson TX2 Performance] for more details.
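for example (nvpmodel applies to TX2; on L4T 28.x the clocks script usually lives in the nvidia user’s home directory):

sudo nvpmodel -m 0
sudo /home/nvidia/jetson_clocks.sh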
thanks

Hi JerryChang,

I have tried your suggestion and narrowed it down to a simpler case, since my sensor does not produce I420, only GRAY8 or GRAY16_BE, and neither works. ‘gst-inspect-1.0 v4l2src’ shows that v4l2src cannot produce ‘video/x-raw(memory:NVMM)’, only ‘video/x-raw’. Also, omxh264enc does not accept GRAY8 on input, unlike Freescale’s imx6 ‘imxvpuenc_h264’.

nvidia@cam5-0002:/run/user/1001$ gst-launch-1.0 -v v4l2src device=/dev/video1 num-buffers=300 ! 'video/x-raw(memory:NVMM), width=2464, height=2065, framerate=30/1' ! nvtee ! omxh264enc bitrate=20000000 ! qtmux ! filesink location=video.mp4
WARNING: erroneous pipeline: could not link v4l2src0 to nvtee0
nvidia@cam5-0002:/run/user/1001$ gst-launch-1.0 -v v4l2src device=/dev/video1 num-buffers=300 ! 'video/x-raw(memory:NVMM), width=2464, height=2065' ! nvjpegenc ! multifilesink max-files=5
WARNING: erroneous pipeline: could not link v4l2src0 to nvjpegenc0
nvidia@cam5-0002:/run/user/1001$
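
A variant that at least links inserts a CPU colour-space conversion before the encoder (a sketch I have not benchmarked, assuming videoconvert can negotiate GRAY8 to I420):

gst-launch-1.0 -v v4l2src device=/dev/video1 num-buffers=300 ! video/x-raw,format=GRAY8 ! videoconvert ! video/x-raw,format=I420 ! omxh264enc bitrate=20000000 ! qtmux ! filesink location=video.mp4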

Thank you for the link, but I currently work with a TX1.