L4T kernel bug/issue in sensor pixel format handling

I came across an annoying kernel bug/issue in the handling of sensor mode pixel formats.

The pixel format of a sensor is defined in the “pixel_t” property of the mode specification in the device tree. The format defined there is parsed by the “extract_pixel_format” function in sensor_common.c. Most (if not all) sensor dtsi files in the L4T kernel specify “bayer_rggb” for the “pixel_t” property, but this pixel format is not one of the strings that “extract_pixel_format” checks for:

static int extract_pixel_format(
	const char *pixel_t, u32 *format)
{
	size_t size = strnlen(pixel_t, OF_MAX_STR_LEN);

	if (strncmp(pixel_t, "bayer_bggr10", size) == 0)
		*format = V4L2_PIX_FMT_SBGGR10;
	else if (strncmp(pixel_t, "bayer_rggb10", size) == 0)
		*format = V4L2_PIX_FMT_SRGGB10;
	else if (strncmp(pixel_t, "bayer_grbg10", size) == 0)
		*format = V4L2_PIX_FMT_SGRBG10;
	else if (strncmp(pixel_t, "bayer_bggr12", size) == 0)
		*format = V4L2_PIX_FMT_SBGGR12;
	else if (strncmp(pixel_t, "bayer_rggb12", size) == 0)
		*format = V4L2_PIX_FMT_SRGGB12;
	else if (strncmp(pixel_t, "bayer_wdr_pwl_rggb12", size) == 0)
		*format = V4L2_PIX_FMT_SRGGB12;
	else if (strncmp(pixel_t, "bayer_wdr_dol_rggb10", size) == 0)
		*format = V4L2_PIX_FMT_SRGGB10;
	else if (strncmp(pixel_t, "bayer_xbggr10p", size) == 0)
		*format = V4L2_PIX_FMT_XBGGR10P;
	else if (strncmp(pixel_t, "bayer_xrggb10p", size) == 0)
		*format = V4L2_PIX_FMT_XRGGB10P;
	else {
		pr_err("%s: Need to extend format%s\n", __func__, pixel_t);
		return -EINVAL;
	}

	return 0;
}

The only reason the “bayer_rggb” string most drivers use works at all is that the function compares with “strncmp”, limited to the length of the device tree string: the comparison stops after 10 characters, so “bayer_rggb” prefix-matches and resolves to the first entry in the if-else chain that it is a prefix of, which happens to be “bayer_rggb10”, the 10 bit version of the format. A future version of this function with a different order of strings could just as well resolve it to something else. With a 12 bit sensor this returns an incorrect pixel format, which leads to an error in the capture process.
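
To make the prefix matching concrete, here is a minimal standalone C program (user space, not kernel code; strlen stands in for the kernel’s strnlen) that reproduces the comparison:

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *pixel_t = "bayer_rggb"; /* as found in most dtsi files */
	size_t size = strlen(pixel_t);      /* kernel: strnlen(pixel_t, OF_MAX_STR_LEN) */

	/* Only the first 10 characters are compared, so "bayer_rggb"
	 * prefix-matches BOTH candidates below; the if-else chain in
	 * extract_pixel_format simply picks whichever comes first. */
	printf("vs bayer_rggb10: %d\n",
	       strncmp(pixel_t, "bayer_rggb10", size)); /* prints 0 (match) */
	printf("vs bayer_rggb12: %d\n",
	       strncmp(pixel_t, "bayer_rggb12", size)); /* prints 0 as well! */
	return 0;
}

Both comparisons print 0, so the 12 bit entry matches just as well as the 10 bit one; only the order of the if-else chain decides which format you get.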

In my opinion, the “bayer_rggb” string for pixel_t should either be rejected by the “extract_pixel_format” function or be explicitly documented as a 10 bit format and handled as such. Right now, if you specify “bayer_rggb” for a 12 bit sensor, the capture just silently fails with no real clue as to what is going on.
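
A minimal sketch of the second option (my illustration, not a proposed upstream patch): exact matching via strcmp closes the prefix hole, and an explicit legacy entry documents “bayer_rggb” as 10 bit.

static int extract_pixel_format(
	const char *pixel_t, u32 *format)
{
	/* Exact comparison instead of prefix matching. */
	if (strcmp(pixel_t, "bayer_rggb10") == 0 ||
	    strcmp(pixel_t, "bayer_rggb") == 0) /* legacy spelling, 10 bit */
		*format = V4L2_PIX_FMT_SRGGB10;
	else if (strcmp(pixel_t, "bayer_rggb12") == 0)
		*format = V4L2_PIX_FMT_SRGGB12;
	/* ... remaining entries as in the original function ... */
	else {
		pr_err("%s: unknown pixel_t \"%s\"\n", __func__, pixel_t);
		return -EINVAL;
	}

	return 0;
}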

Have a check of the document below for the DT configuration. Some of those properties may not be used by camera_common.c but only by Argus.

[url]https://docs.nvidia.com/jetson/l4t/Tegra%20Linux%20Driver%20Package%20Development%20Guide/camera_sensor_prog.html#wwpID0E0U60HA[/url]

I also reviewed the code, and it looks like pixel_t is deprecated in favor of pixel_phase, mode_type and csi_pixel_bit_depth, but none of the dts files use this approach.

@jas-mx:

Indeed, but from the code, pixel_t also overrides the alternatives. I think this is quite odd for a deprecated property.
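
To illustrate the override, here is a paraphrased sketch of the priority logic, not the verbatim sensor_common.c code; only the property names come from the documentation, the variable names and the format-string assembly are my assumptions:

/* Sketch: pixel_t, if present, wins unconditionally; the newer
 * properties are only consulted as a fallback. */
const char *pixel_t, *mode_type, *pixel_phase, *bit_depth;
char pix_fmt[64];
u32 pixel_format;
int err;

err = of_property_read_string(node, "pixel_t", &pixel_t);
if (err == 0)	/* pixel_t present: the alternatives are never read */
	return extract_pixel_format(pixel_t, &pixel_format);

/* Fallback only when pixel_t is absent: assemble e.g. "bayer_rggb10"
 * from the newer properties and feed it through the same parser. */
err = of_property_read_string(node, "mode_type", &mode_type);
err |= of_property_read_string(node, "pixel_phase", &pixel_phase);
err |= of_property_read_string(node, "csi_pixel_bit_depth", &bit_depth);
if (err)
	return -EINVAL;

snprintf(pix_fmt, sizeof(pix_fmt), "%s_%s%s",
	 mode_type, pixel_phase, bit_depth);
return extract_pixel_format(pix_fmt, &pixel_format);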