Hi NVIDIA,
I have an IMX264 monochrome sensor which sends Y12 pixels.
I have added the following entries to vi2_video_formats:
TEGRA_VIDEO_FORMAT(RAW12, 12, Y12_1X12, 2, 1, T_R16_I,
                   RAW12, Y16_BE, "GRAY16_BE"),
TEGRA_VIDEO_FORMAT(RAW12, 12, Y12_1X12, 1, 1, T_L8,
                   RAW12, GREY, "GRAY8"),
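With these entries in place, both new memory formats show up on the capture node; something like this lists them (assuming the sensor is /dev/video0):

# list the pixel formats, frame sizes and intervals the node advertises
v4l2-ctl -d /dev/video0 --list-formats-ext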
I can acquire images from this sensor using several different methods (rough command lines are sketched after the list):
- a GStreamer pipeline starting with nvcamerasrc, which of course does debayering on non-Bayer input, but still produces an acceptable image with some detail lost compared to the Y12 source
- v4l2-ctl --stream-mmap --stream-to=
- a GStreamer pipeline starting with v4l2src
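Concretely, the three invocations look roughly like this; the device path, resolution, caps and output filename below are illustrative placeholders, not my exact values:

# 1) nvcamerasrc: goes through the ISP, hence the debayering
gst-launch-1.0 nvcamerasrc ! \
    'video/x-raw(memory:NVMM), width=(int)2464, height=(int)2056, format=(string)I420' ! \
    fakesink

# 2) plain V4L2 capture, no GStreamer involved
v4l2-ctl -d /dev/video0 --set-fmt-video=width=2464,height=2056,pixelformat=GREY \
    --stream-mmap --stream-count=100 --stream-to=frames.raw

# 3) v4l2src capture of the 8-bit grey format
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! \
    'video/x-raw, format=GRAY8, width=2464, height=2056' ! \
    fakesink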
The problem I have is that the resulting frame rates are very different and disappointing: with nvcamerasrc I obtain about 20 fps, with v4l2-ctl about 7 fps, but with v4l2src only about 2 fps. All tests were made on the same camera, without changing the imx264 driver. For the v4l2src method, I tried the 'io-mode=0', 'io-mode=2' and 'io-mode=4' property values, without any frame-rate difference.
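For measuring the v4l2src rate I use something like the following, with rendering disabled so the sink does not skew the numbers (again, caps are placeholders):

gst-launch-1.0 -v v4l2src device=/dev/video0 ! 'video/x-raw, format=GRAY8' ! \
    fpsdisplaysink video-sink=fakesink text-overlay=false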
What can I do to speed up the v4l2src-based pipeline?