Jetson Orin Nano YUV V4L2 data order

We are using the Jetson CSI interface to receive YUV data, but we have run into a strange problem. When we fed in test data from a counter (00 01 02 03 04), we found that the order of the data received from V4L2 differed depending on the data format.
As shown in the figure below:

The data sent by the camera is fixed and does not change with the format, but the received data does change with the format. Does Jetson perform any related processing here, and if so, how can we turn it off?
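For reference (a note not from the original posts): the four packed YUV 4:2:2 FourCCs carry the same bytes and differ only in how the four bytes of each macropixel are labelled, which is why a fixed counter pattern appears "reordered" whenever a different format is assumed somewhere in the pipeline. A minimal Python sketch of the four layouts:

```python
# Component order per 4-byte macropixel for the packed YUV 4:2:2 FourCCs.
LAYOUTS = {
    "YUYV": ("Y0", "U", "Y1", "V"),
    "UYVY": ("U", "Y0", "V", "Y1"),
    "YVYU": ("Y0", "V", "Y1", "U"),
    "VYUY": ("V", "Y0", "U", "Y1"),
}

def interpret(raw, fmt):
    """Label the first macropixel's bytes under the given layout."""
    return dict(zip(LAYOUTS[fmt], raw))

counter = bytes([0x00, 0x01, 0x02, 0x03])
for fmt in LAYOUTS:
    print(fmt, interpret(counter, fmt))
```

Running this over the counter bytes 00 01 02 03 shows, for example, that the byte a UYVY sender emits as U is read back as Y0 when the receiver assumes YUYV, so a format mismatch shows up as exactly this kind of component swap.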

hello t_msg,

do I understand correctly that you're using different pixel format types in the pipeline to fetch the stream?
please also note that… YUYV is referred to as YUY2 in the gst pipeline.

Yes, we modified the format in the driver so that we can fetch the data in different formats.
Also, we did not use a gst pipeline; we used the v4l2-ctl command to get the data, for example:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=YUYV --stream-mmap --stream-count=1 --stream-to=test.yuv
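To inspect the raw byte order of such a dump without a separate viewer, a small helper like the following can hex-format the first macropixels (illustration only; the filename test.yuv comes from the command above):

```python
def first_macropixels(data, n=4):
    """Hex-format the first n macropixels (4 bytes each) of a raw
    packed YUV 4:2:2 dump, e.g. one saved by v4l2-ctl --stream-to."""
    return " ".join(f"{b:02x}" for b in data[: 4 * n])

# Example usage against the dump from the command above:
# with open("test.yuv", "rb") as f:
#     print(first_macropixels(f.read()))
```

With counter test data, the printed hex should be an incrementing sequence if nothing in the path reorders the bytes.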

hello t_msg,

in reality, you should configure the format type to be the same as your sensor output.
please check whether it behaves the same with the correct settings.

Yes, that is what led us to this problem in the first place. At first we set the format in the driver to the sensor's actual format but got the wrong result, so we ran the test described above.
In our understanding, no matter what the driver format is, the data should stay consistent with the input. We can't understand why the data order changes.

hello t_msg,

let’s focus on this…

may I know what the sensor's output format is, and what pixelformat you've used.
please also examine the sensor format dump with… $ v4l2-ctl -d /dev/video0 --list-formats-ext
besides… please check the raw dumped content with a 3rd-party utility, such as 7yuv.

The output of the sensor is YUYV, and we also set the driver to YUYV:
image

We have verified this sensor on other platforms (e.g. Raspberry Pi), so we can be sure its output format is correct.
Is there any difference between third-party software and v4l2-ctl? In our understanding, v4l2-ctl should give us the most original, unprocessed data.

Update:
Based on the ordering observed in the previous tests, we found that we only need to configure the sensor output to UYVY, and the image is then normal no matter what format is set in the driver.
Jetson seems to assume that the incoming YUV format is always UYVY, and then adjusts the pixel order according to the driver format?
Is there a problem with our driver configuration, or is this a feature of Jetson?

hello t_msg,

it looks related to Topic 273771.

please see also the VI driver,
i.e. $public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/sensor_common.c
this is where device tree properties are converted into V4L2 formats. do you have the correct settings?
for instance, mode_type + pixel_phase + csi_pixel_bit_depth.
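As a rough illustrative sketch (Python, not the actual kernel C code): the driver composes a key from mode_type, pixel_phase, and csi_pixel_bit_depth, e.g. "yuv_yuyv16", and maps it to a V4L2 pixel format. Only the yuv_yuyv16 → V4L2_PIX_FMT_YUYV pairing is stated in this thread; the other entries are assumed by analogy with the four packed YUV422 formats:

```python
# Hypothetical sketch of the device-tree -> V4L2 format lookup done in
# sensor_common.c. Only the yuv_yuyv16 entry is confirmed in this thread;
# the other three rows are assumptions by analogy.
PIX_FMT_TABLE = {
    "yuv_yuyv16": "V4L2_PIX_FMT_YUYV",
    "yuv_uyvy16": "V4L2_PIX_FMT_UYVY",  # assumption
    "yuv_yvyu16": "V4L2_PIX_FMT_YVYU",  # assumption
    "yuv_vyuy16": "V4L2_PIX_FMT_VYUY",  # assumption
}

def dt_to_v4l2(mode_type: str, pixel_phase: str, bit_depth: str) -> str:
    """Build the key as mode_type + '_' + pixel_phase + bit_depth."""
    return PIX_FMT_TABLE[f"{mode_type}_{pixel_phase}{bit_depth}"]
```

With the device tree posted below (mode_type = "yuv", pixel_phase = "yuyv", csi_pixel_bit_depth = "16"), this lookup yields V4L2_PIX_FMT_YUYV.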

Yes, we confirmed that the device tree format is correct. We also tried changing it to other formats, and the test results did not change.

hello t_msg,

may I double-confirm what the sensor's output pixel format is. also, which sensor are you working with?

Of course, this is the device tree configuration:

mode0 {
	mclk_khz = "24000";
	num_lanes = "2";
	tegra_sinterface = "serial_b";
	phy_mode = "DPHY";
	discontinuous_clk = "yes";
	dpcm_enable = "false";
	cil_settletime = "0";
	lane_polarity = "6";
	active_w = "1280";
	active_h = "720";
	mode_type = "yuv";
	pixel_phase = "yuyv";
	csi_pixel_bit_depth = "16";
	readout_orientation = "90";
	line_length = "3448";
	inherent_gain = "1";
	mclk_multiplier = "9.33";
	pix_clk_hz = "300000000";

	gain_factor = "16";
	framerate_factor = "1000000";
	exposure_factor = "1000000";
	min_gain_val = "16"; /* 1.00x */
	max_gain_val = "170"; /* 10.66x */
	step_gain_val = "1";
	default_gain = "16"; /* 1.00x */
	min_hdr_ratio = "1";
	max_hdr_ratio = "1";
	min_framerate = "2000000"; /* 2.0 fps */
	max_framerate = "21000000"; /* 21.0 fps */
	step_framerate = "1";
	default_framerate = "21000000"; /* 21.0 fps */
	min_exp_time = "13"; /* us */
	max_exp_time = "683709"; /* us */
	step_exp_time = "1";
	default_exp_time = "2495"; /* us */

	embedded_metadata_height = "0";
};

The mbus configurations we are trying in the driver:

MEDIA_BUS_FMT_YUYV8_1X16,
MEDIA_BUS_FMT_YVYU8_1X16,
MEDIA_BUS_FMT_UYVY8_1X16,
MEDIA_BUS_FMT_VYUY8_1X16,

In addition, we are using an FPGA as the source, so it is not a specific sensor.

hello t_msg,

according to your device tree settings,

its pixel format will be configured as yuv_yuyv16, i.e. V4L2_PIX_FMT_YUYV, which is also known as YUV 4:2:2.

did you confirm you've changed the sensor config correctly?
according to Topic 193013, it looks like there's no issue with YUYV.

Yes, as I mentioned above, we are using FPGA test data that increments in sequence, but the order of the data we receive through V4L2 is indeed scrambled: bytes are swapped, and the swapping changes with the YUV format set in the driver. In our understanding, this should not happen.

Regarding other people's setups, we don't know what YUV format their cameras output. If the camera output format is UYVY, then no matter what YUV format is set in the driver, the image is always correct. (You can check the table provided above.)
At present, our only workaround is to configure the sensor output as UYVY, which makes the format come out correct.

Hi,
We support all YUV422 formats (YUYV, UYVY, YVYU, VYUY). Please check which format your sensor supports and configure it correctly in the device tree. It looks like the format in the device tree does not match the sensor driver, triggering this mismatch.

Thanks for your reply.
I have a question: if the format does not match, will Jetson modify the order of the image data? From what we can see so far, the data is indeed modified, which we think should not happen.

Hi,
No. The frame data is the same as what the camera sensor generates. If you check the saved YUV and it is UYVY, then the camera sensor is generating UYVY.

Yes, I think so too, but strangely we get a different result when using the FPGA to generate test data. This is our biggest doubt at present.

Hi,
We would suggest checking the FPGA to investigate why it does not generate data in the correct format. Our hardware engines do not modify the data; it is captured and stored exactly as the camera source generates it.

Hi t_msg,
Do you have an Orin Nano devkit on hand to verify on?