GMSL sensor display color is not correct (formats: YUYV and UYVY)?

Hi NV_Team,

Our camera sensor outputs in YUYV (YUV422 8-bit) format, but the colors are wrong when we display it with gst-launch. How does this happen?

  1. When yuyv is set in the dtsi, the file captured with v4l2-ctl is actually uyvy.
  2. When uyvy is set in the dtsi, the file captured with v4l2-ctl is actually yuyv.
    Below are the commands we used and the dumped YUV files.

DTSI config as UYVY
dynamic_pixel_bit_depth = "8";
csi_pixel_bit_depth = "8";
mode_type = "yuv";
pixel_phase = "uyvy";

nvidia@orin:~$ v4l2-ctl -d /dev/video4 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'UYVY' (UYVY 4:2:2)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
	[1]: 'UYVY' (UYVY 4:2:2)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)

nvidia@orin:~$ v4l2-ctl -V --set-fmt-video=width=1920,height=1080 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=2M_uyvy_video.yuv -d /dev/video4
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'UYVY' (UYVY 4:2:2)
	Field             : None
	Bytes per Line    : 3840
	Size Image        : 4147200
	Colorspace        : sRGB
	Transfer Function : Default (maps to sRGB)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Limited Range)
	Flags             : 
<

2M_uyvy_video.yuv (4.0 MB)

Hi NV_Team,

DTSI config as YUYV
dynamic_pixel_bit_depth = "8";
csi_pixel_bit_depth = "8";
mode_type = "yuv";
pixel_phase = "yuyv";

nvidia@orin:~$ v4l2-ctl -d /dev/video4 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'YUYV' (YUYV 4:2:2)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
	[1]: 'YUYV' (YUYV 4:2:2)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)

nvidia@orin:~$ v4l2-ctl -V --set-fmt-video=width=1920,height=1080 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=2M_yuyv_video.yuv -d /dev/video4
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'YUYV' (YUYV 4:2:2)
	Field             : None
	Bytes per Line    : 3840
	Size Image        : 4147200
	Colorspace        : sRGB
	Transfer Function : Default (maps to sRGB)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Limited Range)
	Flags             : 
<

2M_yuyv_video.yuv (4.0 MB)

hello dennis.jiang,

could you please have a quick try using 10-bit to receive the sensor data?

Hi JerryChang,

I have changed the depth from "8" to "10", but the display color is still not correct.

dynamic_pixel_bit_depth = "8";
csi_pixel_bit_depth = "8";
mode_type = "yuv";
pixel_phase = "yuyv";

To

dynamic_pixel_bit_depth = "10";
csi_pixel_bit_depth = "10";
mode_type = "yuv";
pixel_phase = "yuyv";
nvidia@orin:~$ v4l2-ctl -d /dev/video4 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'YUYV' (YUYV 4:2:2)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
	[1]: 'YUYV' (YUYV 4:2:2)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)

2M_yuyv_video.yuv (4.0 MB)

hello dennis.jiang,

according to the VI driver, it by default supports YUV with 16-bit formats.
for your use-case, please extend the pixel formats to report the correct v4l2 fourcc to the low-level driver side.
for example,
$TOP/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/sensor_common.c

static int extract_pixel_format(
        const char *pixel_t, u32 *format)
{
...
        else if (strncmp(pixel_t, "yuv_yuyv16", size) == 0)
                *format = V4L2_PIX_FMT_YUYV;
        else if (strncmp(pixel_t, "yuv_yvyu16", size) == 0)
                *format = V4L2_PIX_FMT_YVYU;
        else if (strncmp(pixel_t, "yuv_uyvy16", size) == 0)
                *format = V4L2_PIX_FMT_UYVY;
        else if (strncmp(pixel_t, "yuv_vyuy16", size) == 0)
                *format = V4L2_PIX_FMT_VYUY;

Hi JerryChang,

I have already added "yuv_uyvy16" to sensor_common.c and MEDIA_BUS_FMT_YUYV8_1X16 to camera_common.c; please refer to the attached files.

mode0 {
	mclk_khz = "24000";
	num_lanes = "4";
	tegra_sinterface = "serial_c";
	vc_id = "0";
	
	discontinuous_clk = "no";
	dpcm_enable = "false";
	cil_settletime = "0";
	dynamic_pixel_bit_depth = "16";
	csi_pixel_bit_depth = "16";
	mode_type = "yuv";
	pixel_phase = "yuyv";

	active_w = "1920";
	active_h = "1080";
	readout_orientation = "0";
	line_length = "2200";
	inherent_gain = "1";
	
	pix_clk_hz = "74250000";
	serdes_pix_clk_hz = "600000000";

	gain_factor = "10";
	min_gain_val = "0"; /* 0dB */
	max_gain_val = "480"; /* 48dB */
	step_gain_val = "3"; /* 0.3 */
	default_gain = "0";

	min_hdr_ratio = "1";
	max_hdr_ratio = "1";

	framerate_factor = "1000000";
	min_framerate = "30000000"; /* 1.5 */
	max_framerate = "30000000"; /* 30 */
	step_framerate = "1";
	default_framerate= "30000000";

	exposure_factor = "1000000";
	min_exp_time = "30"; /* us */
	max_exp_time = "660000"; /* us */
	step_exp_time = "1";
	default_exp_time = "33334";/* us */

	embedded_metadata_height = "0";
};

- entity 25: fzcama 30-0028 (1 pad, 1 link)
             type V4L2 subdev subtype Sensor flags 0
             device node name /dev/v4l-subdev8
	pad0: Source
		[fmt:YUYV8_1X16/1920x1080 field:none colorspace:srgb]
		-> "13e40000.host1x:nvcsi@15a00000-":0 [ENABLED]

- entity 27: vi-output, fzcama 30-0028 (1 pad, 1 link)
             type Node subtype V4L flags 0
             device node name /dev/video0
	pad0: Sink
		<- "13e40000.host1x:nvcsi@15a00000-":1 [ENABLED]

- entity 53: fzcama 30-002a (1 pad, 1 link)
             type V4L2 subdev subtype Sensor flags 0
             device node name /dev/v4l-subdev9
	pad0: Source
		[fmt:YUYV8_1X16/1920x1080 field:none colorspace:srgb]
		-> "13e40000.host1x:nvcsi@15a00000-":0 [ENABLED]

- entity 55: vi-output, fzcama 30-002a (1 pad, 1 link)
             type Node subtype V4L flags 0
             device node name /dev/video1
	pad0: Sink
		<- "13e40000.host1x:nvcsi@15a00000-":1 [ENABLED]

sensor_common.c (22.1 KB)

Hi,

camera_common.c (28.9 KB)

hello dennis.jiang,

please confirm the sensor output formats; the device tree properties must be identical to the hardware-specific settings.

Hi JerryChang,

I have confirmed with the sensor vendor that the sensor output format is YUYV (YUV422 8-bit).
The device tree is already set to YUV/YUYV.

Thanks.

hello dennis.jiang,

may I know which JetPack release you're working with? thanks

Hi Jerry,

JetPack 5.0 DP + Orin DevKit.

Hi Jerry,

I changed the dtsi to set the format to yuyv16, and now the camera can't be opened.
What is the difference between MEDIA_BUS_FMT_YUYV8_2X8 and MEDIA_BUS_FMT_YUYV8_1X16?
And in extract_pixel_format, YUYV8_2X8 and YUYV8_1X16 both stand for V4L2_PIX_FMT_YUYV.

dmesg log :

[   77.250471] start_addr=(0x20000), end_addr=(0x40000), buffer_size=(0x20000), smp_number_max=(16384)
[   78.307280] tegra-camrtc-capture-vi tegra-capture-vi: uncorr_err: request timed out after 2500 ms
[   78.307576] tegra-camrtc-capture-vi tegra-capture-vi: err_rec: attempting to reset the capture channel
[   78.308464] (NULL device *): vi_capture_control_message: NULL VI channel received
[   78.308678] t194-nvcsi 13e40000.host1x:nvcsi@15a00000: csi5_stream_close: Error in closing stream_id=5, csi_port=6
[   78.308986] (NULL device *): vi_capture_control_message: NULL VI channel received
[   78.309213] t194-nvcsi 13e40000.host1x:nvcsi@15a00000: csi5_stream_open: VI channel not found for stream- 5 vc- 0
[   78.309680] tegra-camrtc-capture-vi tegra-capture-vi: err_rec: successfully reset the capture channel
[   80.867891] tegra-camrtc-capture-vi tegra-capture-vi: uncorr_err: request timed out after 2500 ms
[   80.868169] tegra-camrtc-capture-vi tegra-capture-vi: err_rec: attempting to reset the capture channel
[   80.869469] (NULL device *): vi_capture_control_message: NULL VI channel received
[   80.869677] t194-nvcsi 13e40000.host1x:nvcsi@15a00000: csi5_stream_close: Error in closing stream_id=5, csi_port=6
[   80.869963] (NULL device *): vi_capture_control_message: NULL VI channel received
[   80.870174] t194-nvcsi 13e40000.host1x:nvcsi@15a00000: csi5_stream_open: VI channel not found for stream- 5 vc- 0
[   80.870669] tegra-camrtc-capture-vi tegra-capture-vi: err_rec: successfully reset the capture channel
[   83.428449] tegra-camrtc-capture-vi tegra-capture-vi: uncorr_err: request timed out after 2500 ms
[   83.428731] tegra-camrtc-capture-vi tegra-capture-vi: err_rec: attempting to reset the capture channel
[   83.429463] (NULL device *): vi_capture_control_message: NULL VI channel received
[   83.429675] t194-nvcsi 13e40000.host1x:nvcsi@15a00000: csi5_stream_close: Error in closing stream_id=5, csi_port=6
[   83.429972] (NULL device *): vi_capture_control_message: NULL VI channel received
[   83.430182] t194-nvcsi 13e40000.host1x:nvcsi@15a00000: csi5_stream_open: VI channel not found for stream- 5 vc- 0
[   83.430654] tegra-camrtc-capture-vi tegra-capture-vi: err_rec: successfully reset the capture channel
[   83.437243] bwmgr API not supported
[  120.242967] start_addr=(0x20000), end_addr=(0x40000), buffer_size=(0x20000), smp_number_max=(16384)

mode0 {
	mclk_khz = "24000";
	num_lanes = "4";
	tegra_sinterface = "serial_a";
	vc_id = "1";
	
	discontinuous_clk = "no";
	dpcm_enable = "false";
	cil_settletime = "0";
	dynamic_pixel_bit_depth = "16";
	csi_pixel_bit_depth = "16";
	mode_type = "yuv";
	pixel_phase = "yuyv";

	active_w = "1920";
	active_h = "1080";
	readout_orientation = "0";
	line_length = "2200";
	inherent_gain = "1";
	
	pix_clk_hz = "74250000";
	serdes_pix_clk_hz = "600000000";

	gain_factor = "10";
	min_gain_val = "0"; /* 0dB */
	max_gain_val = "480"; /* 48dB */
	step_gain_val = "3"; /* 0.3 */
	default_gain = "0";

	min_hdr_ratio = "1";
	max_hdr_ratio = "1";

	framerate_factor = "1000000";
	min_framerate = "30000000"; /* 1.5 */
	max_framerate = "30000000"; /* 30 */
	step_framerate = "1";
	default_framerate= "30000000";

	exposure_factor = "1000000";
	min_exp_time = "30"; /* us */
	max_exp_time = "660000"; /* us */
	step_exp_time = "1";
	default_exp_time = "33334";/* us */

	embedded_metadata_height = "0";
};

 $TOP/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/camera_common.c
static const struct camera_common_colorfmt camera_common_color_fmts[] = {
	{
		MEDIA_BUS_FMT_SRGGB12_1X12,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SRGGB12,
	},
	{
		MEDIA_BUS_FMT_SGRBG12_1X12,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SGRBG12,
	},
	{
		MEDIA_BUS_FMT_SGBRG12_1X12,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SGBRG12
	},
	{
		MEDIA_BUS_FMT_SRGGB10_1X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SRGGB10,
	},
	{
		MEDIA_BUS_FMT_SGRBG10_1X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SGRBG10,
	},
	{
		MEDIA_BUS_FMT_SGBRG10_1X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SGBRG10,
	},
	{
		MEDIA_BUS_FMT_SBGGR10_1X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SBGGR10,
	},
	{
		MEDIA_BUS_FMT_SBGGR8_1X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SBGGR8,
	},
	{
		MEDIA_BUS_FMT_SRGGB8_1X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SRGGB8,
	},
	{
		MEDIA_BUS_FMT_YUYV8_1X16,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_YUYV,
	},
	{
		MEDIA_BUS_FMT_YVYU8_1X16,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_YVYU,
	},
	{
		MEDIA_BUS_FMT_UYVY8_1X16,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_UYVY,
	},
	{
		MEDIA_BUS_FMT_VYUY8_1X16,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_VYUY,
	},
	{
		MEDIA_BUS_FMT_RGB888_1X24,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_RGB24,
	},
	{
		MEDIA_BUS_FMT_YUYV8_2X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_YUYV,
	},
	{
		MEDIA_BUS_FMT_YVYU8_2X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_YVYU,
	},
	{
		MEDIA_BUS_FMT_UYVY8_2X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_UYVY,
	},
	{
		MEDIA_BUS_FMT_VYUY8_2X8,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_VYUY,
	},
	/*
	 * The below two formats are not supported by VI4,
	 * keep them at the last to ensure they get discarded
	 */
	{
		MEDIA_BUS_FMT_XRGGB10P_3X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_XRGGB10P,
	},
	{
		MEDIA_BUS_FMT_XBGGR10P_3X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_XRGGB10P,
	},
};

 $TOP/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/sensor_common.c
static int extract_pixel_format(
	const char *pixel_t, u32 *format)
{
	size_t size = strnlen(pixel_t, OF_MAX_STR_LEN);

	printk("fangzhu pixel_t:%s \n", pixel_t);
	if (strncmp(pixel_t, "bayer_bggr10", size) == 0)
		*format = V4L2_PIX_FMT_SBGGR10;
	else if (strncmp(pixel_t, "bayer_bggr8", size) == 0)
		*format = V4L2_PIX_FMT_SBGGR8;
	else if (strncmp(pixel_t, "bayer_rggb10", size) == 0)
		*format = V4L2_PIX_FMT_SRGGB10;
	else if (strncmp(pixel_t, "bayer_grbg10", size) == 0)
		*format = V4L2_PIX_FMT_SGRBG10;
	else if (strncmp(pixel_t, "bayer_gbrg10", size) == 0)
		*format = V4L2_PIX_FMT_SGBRG10;
	else if (strncmp(pixel_t, "bayer_bggr12", size) == 0)
		*format = V4L2_PIX_FMT_SBGGR12;
	else if (strncmp(pixel_t, "bayer_rggb12", size) == 0)
		*format = V4L2_PIX_FMT_SRGGB12;
	else if (strncmp(pixel_t, "bayer_gbrg12", size) == 0)
		*format = V4L2_PIX_FMT_SGBRG12;
	else if (strncmp(pixel_t, "bayer_grbg12", size) == 0)
		*format = V4L2_PIX_FMT_SGRBG12;
	else if (strncmp(pixel_t, "rgb_rgb88824", size) == 0)
		*format = V4L2_PIX_FMT_RGB24;
	else if (strncmp(pixel_t, "bayer_wdr_pwl_rggb12", size) == 0)
		*format = V4L2_PIX_FMT_SRGGB12;
	else if (strncmp(pixel_t, "bayer_wdr_pwl_gbrg12", size) == 0)
		*format = V4L2_PIX_FMT_SGBRG12;
	else if (strncmp(pixel_t, "bayer_wdr_pwl_grbg12", size) == 0)
		*format = V4L2_PIX_FMT_SGRBG12;
	else if (strncmp(pixel_t, "bayer_wdr_dol_rggb10", size) == 0)
		*format = V4L2_PIX_FMT_SRGGB10;
	else if (strncmp(pixel_t, "bayer_xbggr10p", size) == 0)
		*format = V4L2_PIX_FMT_XBGGR10P;
	else if (strncmp(pixel_t, "bayer_xrggb10p", size) == 0)
		*format = V4L2_PIX_FMT_XRGGB10P;
	else if (strncmp(pixel_t, "yuv_yuyv16", size) == 0)
		*format = V4L2_PIX_FMT_YUYV;
	else if (strncmp(pixel_t, "yuv_yvyu16", size) == 0)
		*format = V4L2_PIX_FMT_YVYU;
	else if (strncmp(pixel_t, "yuv_uyvy16", size) == 0)
		*format = V4L2_PIX_FMT_UYVY;
	else if (strncmp(pixel_t, "yuv_vyuy16", size) == 0)
		*format = V4L2_PIX_FMT_VYUY;
	else {
		pr_err("%s: Need to extend format%s\n", __func__, pixel_t);
		return -EINVAL;
	}

	return 0;
}

Hi,
The attached 2M_uyvy_video.yuv is in YUYV. If your sensor driver is ready, please try the gstreamer command and check if you can see the correct output:

$ gst-launch-1.0 v4l2src ! video/x-raw,width=1920,height=1080,format=YUY2 ! videoconvert ! video/x-raw,format=I420 ! xvimagesink sync=0

Hi DaneLLL,

I changed to this command, but the displayed color is still not correct.

gst-launch-1.0 v4l2src ! video/x-raw,width=1920,height=1080,format=YUY2 ! videoconvert ! video/x-raw,format=I420 ! xvimagesink sync=0
nvidia@xavier:~$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUYV 4:2:2
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)

Hi,
It looks like something is wrong in the sensor driver. It probably swaps YUYV and UYVY. Please confirm you can run this command and get valid YUYV frame data:

$ v4l2-ctl -V --set-fmt-video=width=1920,height=1080,pixelformat=YUYV --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=2M_uyvy_video.yuv -d /dev/video4

Generally, if the v4l2-ctl command works, the gstreamer command works as well.

Hi DaneLLL,

I don't think it's the sensor driver, because the sensor driver was not changed at all; only the dtsi configuration was switched between uyvy and yuyv, and the dumped YUV data came out swapped:
If the dtsi sets pixel_phase = "uyvy", the dumped YUV data is yuyv.
If the dtsi sets pixel_phase = "yuyv", the dumped YUV data is uyvy.

nvidia@xavier:~$ v4l2-ctl -V --set-fmt-video=width=1920,height=1080 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=2M_uyvy_video.yuv -d /dev/video0
<
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'YUYV'
	Field             : None
	Bytes per Line    : 3840
	Size Image        : 4147200
	Colorspace        : sRGB
	Transfer Function : Default (maps to sRGB)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Limited Range)
	Flags             : 

2M_yuyv_dtsi_video.yuv (4.0 MB)

hello dennis.jiang,

could you please try dumping more frames for confirmation?
for example, use --stream-count=3 to examine the 2nd and 3rd frames as well.
thanks

Hi Jerry,

Please refer to this 5-frame file.

nvidia@xavier:~$ v4l2-ctl -V --set-fmt-video=width=1920,height=1080 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=5 --stream-to=2M_yuyv_dtsi_video.yuv -d /dev/video0
<<<<<
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'YUYV'
	Field             : None
	Bytes per Line    : 3840
	Size Image        : 4147200
	Colorspace        : sRGB
	Transfer Function : Default (maps to sRGB)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Limited Range)
	Flags             : 

2M_yuyv_dtsi_video.yuv (19.8 MB)

Hi,
Either YUYV or UYVY is the same to the VI interface: it captures each frame as 4147200 bytes and does no further processing of the frame data. From the YUV files you attached, it looks like the sensor can output either YUYV or UYVY. Please check the driver and hard-code it to YUYV for a try.

Hi DaneLLL,

Thanks for your patience and your reply.
My driver is for a GMSL YUV camera. Can you describe in more detail how to hard-code it to YUYV? I'm not sure I understand.