CSI RAW 10/12 bit truncating to 8 bit in TX2 hardware

I am using an IMX290 sensor, which supports 10/12-bit RAW Bayer, on a product with a TX2 and JetPack 4.2.
We require the raw Bayer data for downstream processing in a GStreamer pipeline. GStreamer does not currently handle Bayer data with more than 8 bits per pixel, which is fine for us; 8 bits are sufficient for our purposes.

We would like the hardware to perform the 10-bit → 8-bit truncation. As I see it, the hardware should support this in two ways:

a) CSI → VI → ISP, using libargus
This pipeline works, but it already performs debayering, and we need raw Bayer. According to the current libargus documentation (32.3.1 release), raw Bayer output is not yet supported and is planned for a future release.

b) CSI → VI → MC, using V4L2
This pipeline works for getting the raw data; capturing with e.g. v4l2-ctl, I can see the data is written to memory as 16-bit words with the least significant bits replicated. According to the TRM, it should be possible to configure the VI to use the T_L8 pixel format and truncate to 8 bit for further downstream processing by GStreamer.
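To make the two memory layouts concrete, here is a minimal sketch of how I read the capture data. The exact replication scheme (10-bit sample MSB-aligned in the 16-bit word, with the sample's top bits copied into the low padding) is my assumption from inspecting captures, not something I found spelled out in the TRM:

```python
def raw10_to_t_r16_i(sample10: int) -> int:
    """10-bit sample stored as a 16-bit word (T_R16_I-like layout):
    MSB-aligned, with the low 6 padding bits filled by replicating
    the sample's most significant bits (assumed scheme)."""
    assert 0 <= sample10 < (1 << 10)
    return ((sample10 << 6) | (sample10 >> 4)) & 0xFFFF

def raw10_to_t_l8(sample10: int) -> int:
    """Desired T_L8 truncation: keep the 8 MSBs, drop the 2 LSBs."""
    assert 0 <= sample10 < (1 << 10)
    return sample10 >> 2

v = 0b1010110011                     # arbitrary 10-bit test value (0x2B3)
print(hex(raw10_to_t_r16_i(v)))      # -> 0xaceb (16-bit, MSB-aligned)
print(hex(raw10_to_t_l8(v)))         # -> 0xac   (8-bit truncated)
```

Note that the truncated 8-bit value is exactly the high byte of the 16-bit word, which is why hardware truncation should be lossless relative to what GStreamer would see anyway.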

My questions are:

  1. Is there any update on the status of libargus and raw Bayer output?
  2. How can I configure the VI to use the T_L8 pixel format instead of the 16-bit format? I looked for this in the device tree, but couldn't find anything there.
  1. Sorry to say, this feature is still postponed due to developer resources.
  2. You can check vi4_fops.c for the details. You may need to hard-code it for testing in your case.

When I apply the following changes to vi4_formats.h, the image I get from a v4l2-ctl capture is an 8-bit raw Bayer image. This even allows me to use the sensor data in a GStreamer pipeline without the patches to handle 16-bit Bayer data. This is on a TX2 with JetPack 4.3.1.

diff --git a/drivers/media/platform/tegra/camera/vi/vi4_formats.h b/drivers/media/platform/tegra/camera/vi/vi4_formats.h
index de33c42..20e8237 100644
--- a/drivers/media/platform/tegra/camera/vi/vi4_formats.h
+++ b/drivers/media/platform/tegra/camera/vi/vi4_formats.h
@@ -87,6 +87,7 @@ static const struct tegra_video_format vi4_video_formats[] = {
        /* RAW 7: TODO */

        /* RAW 8 */
+       /*
        TEGRA_VIDEO_FORMAT(RAW8, 8, SRGGB8_1X8, 1, 1, T_L8,
                                RAW8, SRGGB8, "RGRG.. GBGB.."),
        TEGRA_VIDEO_FORMAT(RAW8, 8, SGRBG8_1X8, 1, 1, T_L8,
@@ -95,8 +96,10 @@ static const struct tegra_video_format vi4_video_formats[] = {
                                RAW8, SGBRG8, "GBGB.. RGRG.."),
        TEGRA_VIDEO_FORMAT(RAW8, 8, SBGGR8_1X8, 1, 1, T_L8,
                                RAW8, SBGGR8, "BGBG.. GRGR.."),
+                               */

        /* RAW 10 */
+       /*
        TEGRA_VIDEO_FORMAT(RAW10, 10, SRGGB10_1X10, 2, 1, T_R16_I,
                                RAW10, SRGGB10, "RGRG.. GBGB.."),
        TEGRA_VIDEO_FORMAT(RAW10, 10, SGRBG10_1X10, 2, 1, T_R16_I,
@@ -105,6 +108,15 @@ static const struct tegra_video_format vi4_video_formats[] = {
                                RAW10, SGBRG10, "GBGB.. RGRG.."),
        TEGRA_VIDEO_FORMAT(RAW10, 10, SBGGR10_1X10, 2, 1, T_R16_I,
                                RAW10, SBGGR10, "BGBG.. GRGR.."),
+                               */
+       TEGRA_VIDEO_FORMAT(RAW10, 10, SRGGB10_1X10, 1, 1, T_L8,
+                               RAW10, SRGGB8, "RGRG.. GBGB.."),
+       TEGRA_VIDEO_FORMAT(RAW10, 10, SGRBG10_1X10, 1, 1, T_L8,
+                               RAW10, SGRBG8, "GRGR.. BGBG.."),
+       TEGRA_VIDEO_FORMAT(RAW10, 10, SGBRG10_1X10, 1, 1, T_L8,
+                               RAW10, SGBRG8, "GBGB.. RGRG.."),
+       TEGRA_VIDEO_FORMAT(RAW10, 10, SBGGR10_1X10, 1, 1, T_L8,
+                               RAW10, SBGGR8, "BGBG.. GRGR.."),

        /* RAW 12 */
        TEGRA_VIDEO_FORMAT(RAW12, 12, SRGGB12_1X12, 2, 1, T_R16_I,

When I apply the same patch to vi2_formats.h to get this working on the TX1, I also get the desired 8-bit memory layout. However, it seems that in the 10-bit → 8-bit conversion the pixels are truncated at the wrong end (the 2 MSBs are discarded instead of the 2 LSBs). I've looked at the TRM (v2.0) but could not find anything there so far.
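The difference between the two boards, expressed as arithmetic: correct truncation drops the 2 LSBs, while the TX1 result looks as if the low byte of the sample is taken instead, discarding the 2 MSBs so that bright pixels wrap around:

```python
def truncate_correct(sample10: int) -> int:
    """Keep the 8 MSBs of a 10-bit sample (what the TX2 produces)."""
    return sample10 >> 2

def truncate_wrong(sample10: int) -> int:
    """Keep the low 8 bits, discarding the 2 MSBs (what the TX1
    output looks like): values >= 256 wrap around."""
    return sample10 & 0xFF

bright = 0x300                    # a bright 10-bit pixel (768 of 1023)
print(truncate_correct(bright))   # -> 192, still bright
print(truncate_wrong(bright))     # -> 0, wraps to black
```

The wrap-around is easy to spot in captures: highlights in the scene come out dark instead of clipped.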

Any ideas?

OK, figured it out. There is a V4L2 control, write_isp_format, which defaults to 0 on the TX1 and to 1 on the TX2. When set to 0 it bypasses PIXFMT, which results in the wrong truncation. Setting it to 1 on the TX1 gives the correct truncation.