I tried the following combinations.
(I made sure the color space provided by my vendor was correct, as I can verify this using other SoC chips.)
case 1: sensor (YUYV) + max9295 + max96712 + Orin (dtsi: yuyv) -> YUV file (UYVY)
case 2: sensor (UYVY) + max9295 + max96712 + Orin (dtsi: yuyv) -> YUV file (YUYV)
case 3: sensor (UYVY) + max96717f + max96712 + Orin (dtsi: yuyv) -> YUV file (UYVY)
case 4: sensor (UYVY) + max96717f + max96712 + Orin (dtsi: uyvy) -> YUV file (YUYV)
Summary: in every case, Orin did not capture a YUV file in the expected color space; the byte order was always the opposite of what was configured (YUYV came out as UYVY, and vice versa).
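For reference, YUYV and UYVY differ only in the byte order inside each two-pixel group, which is why a "flipped" capture looks exactly like the other format. Below is a minimal sketch in C (my own illustration, not code from the Orin capture path) showing the two layouts and how swapping the bytes of every 16-bit word converts one into the other:

```c
#include <stddef.h>
#include <stdint.h>

/* YUYV macropixel: Y0 U0 Y1 V0
 * UYVY macropixel: U0 Y0 V0 Y1
 * Swapping the two bytes of every 16-bit word converts YUYV <-> UYVY,
 * in either direction.
 */
static void swap_yuyv_uyvy(uint8_t *buf, size_t nbytes)
{
    for (size_t i = 0; i + 1 < nbytes; i += 2) {
        uint8_t tmp = buf[i];
        buf[i]      = buf[i + 1];
        buf[i + 1]  = tmp;
    }
}
```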
I did find a way to get the correct color space from other posts, but that method works around the problem by modifying the deserializer color mapping. It does not address the root cause.
The specific process is exactly the same as what I described in the post above.
Nvidia team, do you have any better solution?
Jetson only displays ‘uyvy’ as the right color on screen. If the sensor outputs YUYV (dtsi pixel_phase=yuyv), you should convert the color space through the CPU, with a command like:
First of all, thank you for your reply. I am aware of this display method.
What I want to know is why YUYV is flipped to UYVY. As for converting the color space, either the CPU or the GPU can do that.
Currently, whether I use the following command or the standard V4L2 method to access the camera data, the YUV data I obtain is the opposite of the format that was actually configured.
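To be concrete about what I mean by the V4L2 method, here is a minimal sketch (the device node /dev/video0 and the 1920x1080 resolution are my assumptions, adjust as needed) that requests YUYV and prints the fourcc the driver reports back; the mismatch only shows up when you compare this against a hex dump of the captured bytes:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    /* /dev/video0 is an assumption; use the actual capture node. */
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 1920;                /* assumed resolution */
    fmt.fmt.pix.height      = 1080;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;   /* request YUYV */
    fmt.fmt.pix.field       = V4L2_FIELD_NONE;

    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
        perror("VIDIOC_S_FMT");
        close(fd);
        return 1;
    }

    /* The driver may adjust the request; print what was actually negotiated. */
    uint32_t fourcc = fmt.fmt.pix.pixelformat;
    printf("negotiated fourcc: %c%c%c%c\n",
           fourcc & 0xff, (fourcc >> 8) & 0xff,
           (fourcc >> 16) & 0xff, (fourcc >> 24) & 0xff);

    close(fd);
    return 0;
}
```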