Trouble having V4L2 auto-select the correct mode

Hi,
I have developed a driver for a camera that supports 3 modes: GREY, Y14, and Y16. The camera outputs 1680x1050 at 90, 45, or 30 fps. The issue is that when I set use_sensor_mode_id = false and use a command like v4l2-ctl -d /dev/video1 --set-fmt-video=width=1680,height=1050,pixelformat='Y14 ' --stream-mmap --stream-to=frame.bin, it is unable to select the correct Y14 mode; it always selects mode 0.
If I use use_sensor_mode_id = true and add --set-ctrl=sensor_mode=1, then it works as expected. But my understanding from the Sensor Software Driver Programming section of the Jetson Linux Developer Guide 34.1 documentation is that when use_sensor_mode_id is false, the framework uses its default mode selection logic, which selects a mode based on resolution, color format, and frame rate.
Here are some of my relevant code snippets:

You can see the modes defined in my camera_common_frmfmt table.

static const struct camera_common_frmfmt dominite_frmfmt[] = {
	/* {size,      framerates,      num_framerates, hdr_en, mode} */
	{{1680, 1050}, dominite_m1_fps, 3,              0,      DOMINITE_MODE_1680X1050_08_2L},
	{{1680, 1050}, dominite_m1_fps, 3,              0,      DOMINITE_MODE_1680X1050_14_4L},
	{{1680, 1050}, dominite_m1_fps, 3,              0,      DOMINITE_MODE_1680X1050_16_4L},
};
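(dominite_m1_fps is referenced above but not shown; assuming it simply lists the three rates that appear in the v4l2-ctl output below, it would look like this:)

/* Assumed definition of the frame-rate table referenced above, based
 * on the 90/45/30 fps intervals in the v4l2-ctl listing further down. */
static const int dominite_m1_fps[] = {90, 45, 30};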

In the device tree there are 3 modes defined:

mode0 {
	...
	active_w = "1680";
	active_h = "1050";
	mode_type = "raw";
	pixel_phase = "y";
	csi_pixel_bit_depth = "8";
	...
};
mode1 {
	...
	active_w = "1680";
	active_h = "1050";
	dynamic_pixel_bit_depth = "16";
	mode_type = "gray14";
	pixel_phase = "y";
	csi_pixel_bit_depth = "14";
	...
};
mode2 {
	...
	active_w = "1680";
	active_h = "1050";
	dynamic_pixel_bit_depth = "16";
	mode_type = "gray16";
	pixel_phase = "y";
	csi_pixel_bit_depth = "16";
	...
};
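For context on how these mode properties turn into the pixel formats listed below: as far as I can tell (this paraphrases sensor_common.c from memory, so treat the details as an assumption), the framework concatenates mode_type, pixel_phase, and csi_pixel_bit_depth into a single name and string-matches it against a table of known formats:

/* Paraphrased sketch of the property-to-format mapping in
 * sensor_common.c; an assumption, not the literal kernel code. */
const char *mode_type = "bayer";	/* illustrative Bayer example */
const char *pixel_phase = "rggb";
unsigned int bit_depth = 10;
char pix_name[24];

/* builds "bayer_rggb10"; extract_pixel_format() then strncmp()s the
 * name against known strings to pick the V4L2 format for the mode */
snprintf(pix_name, sizeof(pix_name), "%s_%s%u",
	 mode_type, pixel_phase, bit_depth);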

Listing the formats with v4l2-ctl gives:

v4l2-ctl -d 1 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'GREY' (8-bit Greyscale)
		Size: Discrete 1680x1050
			Interval: Discrete 0.011s (90.000 fps)
			Interval: Discrete 0.022s (45.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1680x1050
			Interval: Discrete 0.011s (90.000 fps)
			Interval: Discrete 0.022s (45.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1680x1050
			Interval: Discrete 0.011s (90.000 fps)
			Interval: Discrete 0.022s (45.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
	[1]: 'Y14 ' (14-bit Greyscale)
		Size: Discrete 1680x1050
			Interval: Discrete 0.011s (90.000 fps)
			Interval: Discrete 0.022s (45.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1680x1050
			Interval: Discrete 0.011s (90.000 fps)
			Interval: Discrete 0.022s (45.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1680x1050
			Interval: Discrete 0.011s (90.000 fps)
			Interval: Discrete 0.022s (45.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
	[2]: 'Y16 ' (16-bit Greyscale)
		Size: Discrete 1680x1050
			Interval: Discrete 0.011s (90.000 fps)
			Interval: Discrete 0.022s (45.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1680x1050
			Interval: Discrete 0.011s (90.000 fps)
			Interval: Discrete 0.022s (45.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1680x1050
			Interval: Discrete 0.011s (90.000 fps)
			Interval: Discrete 0.022s (45.000 fps)
			Interval: Discrete 0.033s (30.000 fps)

I did some digging, and looking at camera_common.c:camera_common_try_fmt(...), it looks like it only checks whether the resolution matches

...
if (mf->width == frmfmt[i].size.width &&
    mf->height == frmfmt[i].size.height) {
	...
	s_data->mode_prop_idx = i;
	...
}

to select the mode. Should it also check the pixel format? How is the pixel format supposed to be set? Is there an additional camera driver function needed?
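For illustration, here is the kind of format-aware match I would have expected, assuming a hypothetical per-mode media-bus-code table (mode_codes[] does not exist in the stock framework, and the exact MEDIA_BUS_FMT_* names are assumptions that depend on the kernel version):

/* Hypothetical per-mode media bus codes kept alongside frmfmt[];
 * illustrative only, not part of the stock framework. */
static const unsigned int mode_codes[] = {
	MEDIA_BUS_FMT_Y8_1X8,	/* mode 0: GREY */
	MEDIA_BUS_FMT_Y14_1X14,	/* mode 1: Y14  */
	MEDIA_BUS_FMT_Y16_1X16,	/* mode 2: Y16  */
};

/* the matching loop could then also compare the requested code */
if (mf->width == frmfmt[i].size.width &&
    mf->height == frmfmt[i].size.height &&
    mf->code == mode_codes[i]) {
	s_data->mode_prop_idx = i;
	/* ... */
}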
Thank you!

Hi,

For basic camera functionality, you first need to check the device and driver configuration.
You can refer to the programming guide below for detailed information on device tree and driver implementation.
https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/SD/CameraDevelopment/SensorSoftwareDriverProgramming.html?highlight=programing#sensor-software-driver-programming

Please refer to "Applications Using V4L2 IOCTL Directly" and use the V4L2 IOCTLs to verify basic camera functionality.
https://docs.nvidia.com/jetson/archives/r36.3/DeveloperGuide/SD/CameraDevelopment/SensorSoftwareDriverProgramming.html?highlight=programing#to-run-a-v4l2-ctl-test

Once the configuration is confirmed and it still fails, the link below explains how to gather logs and gives some tips for debugging.
https://elinux.org/Jetson/l4t/Camera_BringUp#Steps_to_enable_more_debug_messages

Thanks!

Hi,
As stated in my message above, the camera works fine if I manually set --set-ctrl=sensor_mode=1 in the use_sensor_mode_id = true case, so the camera itself works fine. What I see is that when use_sensor_mode_id = false, my .set_mode function is always given mode 0. The issue is in your default mode selection, which I showed in the code snippet above from camera_common.c:camera_common_try_fmt(...).
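For reference, the working invocation in that case just combines the two commands from my first post:

v4l2-ctl -d /dev/video1 --set-ctrl=sensor_mode=1 --set-fmt-video=width=1680,height=1050,pixelformat='Y14 ' --stream-mmap --stream-to=frame.bin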

This is known behavior. use_sensor_mode_id is intended for sensors that report the same output size but different frame rates.

Hi Shane,
Per the documentation, when use_sensor_mode_id is false it "uses default mode selection logic, which selects a mode based on resolution, color format, and frame rate."

So shouldn't it also consider the color format? Is this a bug? If so, is it on the NVIDIA side (camera core), in V4L2, or in my own driver? Is there a way that I am missing to have it select the correct mode based on pixel format? This seems to work fine for UVC cameras but not for CSI cameras.
Thank you for your help.

Yes, it could be a defect of the AGX CSI camera framework.
That's why the use_sensor_mode_id property was implemented for this case.

Hi Shane,
Can you explain how the table that is shown when calling v4l2-ctl -d 1 --list-formats-ext is built? I don't understand why it shows the same resolution 3 times. It seems like it applies a cross product of all the modes defined in the DT and the modes defined in static const struct camera_common_frmfmt dominite_frmfmt[].
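My guess from a quick look (a paraphrased sketch from memory, not the literal source) is that the frame-size enumeration in camera_common.c never consults the requested pixel format, so every frmfmt entry is reported under each of the three formats, roughly:

/* Paraphrased sketch of camera_common_enum_framesizes(): fse->code
 * (the pixel format being enumerated) is never compared, so all
 * numfmts entries appear under every pixel format. */
static int enum_framesizes_sketch(struct camera_common_data *s_data,
				  struct v4l2_subdev_frame_size_enum *fse)
{
	if (fse->index >= s_data->numfmts)
		return -EINVAL;

	fse->min_width = fse->max_width =
		s_data->frmfmt[fse->index].size.width;
	fse->min_height = fse->max_height =
		s_data->frmfmt[fse->index].size.height;

	return 0;
}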

The current design doesn't consider multiple pixel formats and has problems emulating the sensor modes if multiple pixel formats are reported. I don't recall exactly where the code logic is. You can trace camera_common.c or the related code in …/kernel/nvidia/drivers/media/platform/tegra/camera/*
