We are using the Jetson CSI interface to receive YUV data, but we have run into a very strange problem. When we send incrementing counter test data (00 01 02 03 04 ...), the order of the data received from v4l2 differs depending on the pixel format.
As shown in the figure below:
The data sent by the camera is fixed and does not change with the format, but the received data changes with the format. Does Jetson do any related processing on it? If so, how can we turn that processing off?
Am I understanding correctly that you're using different pixel format types in the pipeline to fetch the stream?
Please also note that YUYV is referred to as YUY2 in the gst pipeline.
Yes, we modified the format in the driver so that we could fetch the data in different formats.
Also, we did not use a gst pipeline; we used the v4l2-ctl command to get the data, for example:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=YUYV --stream-mmap --stream-count=1 --stream-to=test.yuv
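To inspect the dumped bytes without a YUV viewer, we can use a rough sketch like the one below (it assumes the test.yuv produced by the command above, and that the FPGA test pattern simply increments one byte at a time); it reports the first place where the captured bytes stop incrementing:

/* check_counter.c - minimal sketch: verify that the bytes in the capture
 * file form an incrementing 8-bit counter (00 01 02 03 ...).
 * Assumes test.yuv was produced by the v4l2-ctl command above. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("test.yuv", "rb");
    if (!f) { perror("test.yuv"); return 1; }

    int prev = fgetc(f);
    if (prev == EOF) { fprintf(stderr, "empty file\n"); fclose(f); return 1; }

    long pos = 1;
    int c;
    while ((c = fgetc(f)) != EOF) {
        if (c != ((prev + 1) & 0xff)) {
            printf("sequence breaks at offset %ld: %02x followed by %02x\n",
                   pos, prev, c);
            fclose(f);
            return 0;
        }
        prev = c;
        pos++;
    }
    printf("checked %ld bytes, counter increments as expected\n", pos);
    fclose(f);
    return 0;
}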
Yes, that is how we ran into this problem in the first place. We initially set the format in the driver to the sensor's actual format but got the wrong result, so we ran the test above.
In our understanding, no matter what format the driver is set to, the captured data should stay identical to the input. We can't understand why the data order changes.
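For reference, the packed YUV422 pixelformats only define which byte of each 2-pixel group is treated as Y, U, or V. The small C sketch below interprets one 4-byte group of the counter pattern under each layout (byte orders per the standard V4L2 pixel format definitions):

/* Interpret one 4-byte group of the counter pattern (00 01 02 03)
 * under each packed YUV422 fourcc, showing which byte is taken as
 * which component.  Layouts per the V4L2 pixel format definitions. */
#include <stdio.h>

struct fmt { const char *name; int y0, u, y1, v; };  /* byte index of each component */

int main(void)
{
    unsigned char grp[4] = { 0x00, 0x01, 0x02, 0x03 };  /* counter test data */
    struct fmt fmts[] = {
        { "YUYV", 0, 1, 2, 3 },   /* Y0 U  Y1 V  */
        { "UYVY", 1, 0, 3, 2 },   /* U  Y0 V  Y1 */
        { "YVYU", 0, 3, 2, 1 },   /* Y0 V  Y1 U  */
        { "VYUY", 1, 2, 3, 0 },   /* V  Y0 U  Y1 */
    };

    for (int i = 0; i < 4; i++)
        printf("%s: Y0=%02x U=%02x Y1=%02x V=%02x\n",
               fmts[i].name, grp[fmts[i].y0], grp[fmts[i].u],
               grp[fmts[i].y1], grp[fmts[i].v]);
    return 0;
}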
May I know what the sensor's output format is, and what pixelformat you've used?
Please also examine the sensor format dump with:
$ v4l2-ctl -d /dev/video0 --list-formats-ext
Besides, please check the raw dumped content with a 3rd-party utility, such as 7yuv.
The output of the sensor is YUYV, and we also set the driver to YUYV.
We have verified this sensor on other platforms (e.g. Raspberry Pi), so we are confident its output format is correct.
Is there any difference between third-party software and v4l2-ctl? In our understanding, v4l2-ctl should give us the most raw, unprocessed data.
Update:
Following the same sequence of tests as before, we found that we only need to configure the sensor output to UYVY, and the image is then correct no matter what format is set in the driver.
It seems as if Jetson assumes the YUV input format is always UYVY and then adjusts the pixel order according to the driver format?
Is there a problem with our driver configuration, or is this a feature of Jetson?
Please also see the VI driver, i.e.
$public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/sensor_common.c
This is where the device tree properties are converted to V4L formats; do you have the correct settings there?
For instance, mode_type + pixel_phase + csi_pixel_bit_depth.
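As a rough illustration only (a simplified sketch, not the actual sensor_common.c code), those properties end up selecting the media-bus format the VI capture path expects, so a wrong pixel_phase means the capture path assumes a different component order than the sensor actually sends. Something along these lines (the "yuv_uyvy16"-style string and the lookup are illustrative; the media-bus constants come from the Linux UAPI header):

/* Simplified sketch of the kind of lookup done from device tree
 * properties to a V4L2 media-bus code.  Illustration only; see
 * sensor_common.c in the public kernel sources for the real code. */
#include <stdio.h>
#include <string.h>
#include <linux/media-bus-format.h>

static unsigned int pixel_str_to_mbus(const char *pixel_t)
{
    /* pixel_t would be assembled from mode_type + pixel_phase +
     * csi_pixel_bit_depth, e.g. "yuv" + "uyvy" + 16 -> "yuv_uyvy16" */
    if (!strcmp(pixel_t, "yuv_yuyv16"))
        return MEDIA_BUS_FMT_YUYV8_1X16;
    if (!strcmp(pixel_t, "yuv_uyvy16"))
        return MEDIA_BUS_FMT_UYVY8_1X16;
    return 0;  /* unknown / unsupported combination */
}

int main(void)
{
    printf("yuv_uyvy16 -> media-bus code 0x%x\n",
           pixel_str_to_mbus("yuv_uyvy16"));
    return 0;
}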
Yes, as I mentioned above, we are using the FPGA's test data, which increments in sequence, but the order of the data we receive through v4l2 is indeed scrambled: bytes are swapped, and the swapping changes with the YUV format set in the driver. In our understanding, this should not happen.
Regarding other people's setups, I don't know what YUV format their cameras output. If the camera output format is UYVY, then no matter what YUV format is set in the driver, the image will always be correct (you can check the table provided above).
At present, our only workaround is to configure the sensor output to UYVY, so that the captured format is correct.
Hi,
We support all YUV422 formats (YUYV, UYVY, YVYU, VYUY). Please check which format your sensor outputs and configure it correctly in the device tree. It looks like the format in the device tree does not match the sensor output, which triggers this mismatch.
Thanks for your reply.
I have one question: if the format does not match, will Jetson modify the order of the image data? From what I can see so far, the data is indeed modified, which I think should not happen.
Hi,
We would suggest checking the FPGA to investigate why it does not generate data in the correct format. Our hardware engines do not modify the data; it is captured and stored exactly as the camera source generates it.