Camera modes running at the same resolution should still have different pixel clocks, because the frame rates differ.
Here is the calculation formula for your reference; please also see the Sensor Pixel Clock section.
i.e. pixel_clk_hz = sensor output size × frame rate
This should also work, since the clock setting is high enough to drive the 30-fps sensor mode.
Conversely, it will fail if you define 30-fps mode settings but launch the 60-fps mode.
Hello Jerry,
OK, thanks for your answer.
So, is it correct to understand that the 4K60 mode settings can also be used for 4K30, as long as the pixel clock meets the requirement for 4K30 (greater than 3840 × 2160 × 30)?
That's correct, and it should work for the validation stage.
Since this is only a clock configuration, you may still use separate mode definitions in your formal camera solution.
“According to the diagram, the SoC is connected to the sensor via 8 lanes. When transmitting 4K60 over 8 lanes, should the configuration be num_lanes = “8” and bus-width = <8>? However, the sensor may not need all 8 lanes for lower resolutions such as 4K30 and 1080p60. For example, in one modeX with a resolution of 1080p, only 4 lanes are used for transmission. Could a configuration with num_lanes = “4” and bus-width = <8> cause any anomalies?”
To add more information: the sensor uses MIPI to transmit 4K60 RGB888 in a two-port, 4-lane gang mode, i.e. it splits the image into two 4-lane output sources. In this case, how should I configure num-lanes and bus-width in the DTS mode, and are there any other DTS considerations for this mode? Also, at lower resolutions such as 4K30 or 1080p, the sensor only uses 4 lanes to transmit data, while the other group (1 clock + 4 data lanes) outputs the same data in copy mode. Should I disable that second 4-lane output?
For your use case, the device tree should be configured with 8-lane settings.
Once the device tree is configured for 8 lanes, two CSI bricks (assume CSI-A/B + CSI-C/D) will be used to capture buffers. However, there is logic in the VI driver to fall back to a single CSI brick (only 4 lanes used) for 4K30 capture. The tricky part is achieving this by setting num-lanes=<4> in the device tree for that mode (assume it uses CSI-C/D); it may work as long as there is correct MIPI signaling on CSI-C/D.
Hence,
you should keep the unused 4 lanes connected, disable copy mode, and output only 4-lane signals to the CSI brick.
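To make the two mode definitions concrete, here is a minimal device tree sketch (property names follow the usual Jetson sensor DT convention; the mode names, clock values, and omitted properties are placeholders, not a complete configuration):

```dts
/* sketch only: per-mode lane settings for the gang-mode sensor */
mode0 { /* 3840x2160@60, both CSI bricks (8 lanes) */
	num_lanes = "8";
	active_w = "3840";
	active_h = "2160";
	pix_clk_hz = "497664000";
	/* ... remaining mode properties ... */
};
mode1 { /* 3840x2160@30, single CSI brick (4 lanes) */
	num_lanes = "4";
	active_w = "3840";
	active_h = "2160";
	pix_clk_hz = "248832000";
	/* ... remaining mode properties ... */
};
```

The endpoint's bus-width would stay at <8> for the physical 8-lane hookup, per the discussion above.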
Hi Jerry,
“Okay, I'll turn off copy mode for testing. Also, how can I differentiate between 4K30 and 4K60 in the modeX list in this setup, given that 4K30 uses ‘num-lanes=<4>’ and 4K60 uses ‘num-lanes=<8>’? When operating on the /dev/video0 node, my understanding is that VIDIOC_S_FMT can switch to the desired resolution using the W and H parameters, but there is no request parameter for the frame rate. Since different frame rates may require different num-lanes, how do I select the right modeX for the same resolution at different frame rates? Or can I use ‘num-lanes=<8>’ for 4K30 even though it uses a 4-lane transmission?”
Please refer to the developer guide, Device Properties section.
You may toggle use_sensor_mode_id=1 and use the TEGRA_CAMERA_CID_SENSOR_MODE_ID control to select a specific sensor mode.
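For illustration, the two pieces involved are a device tree property and a user-space control (names as documented in the Jetson developer guide; double-check them against your release):

```dts
/* in the sensor's device tree node */
use_sensor_mode_id = "true";
```

With that set, an application can pick the mode explicitly, e.g. with v4l2-ctl: `v4l2-ctl -d /dev/video0 --set-ctrl sensor_mode=1` (the control name exposed for TEGRA_CAMERA_CID_SENSOR_MODE_ID is typically sensor_mode; list the available controls with `v4l2-ctl -l` to confirm).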
Hi Jerry,
“Do I need to add TEGRA_CAMERA_CID_SENSOR_MODE_ID to the ctrl_cid_list in the sensor driver structure?
static struct tegracam_ctrl_ops imx219_ctrl_ops = {
	.numctrls = ARRAY_SIZE(ctrl_cid_list),
	.ctrl_cid_list = ctrl_cid_list,
	.set_gain = imx219_set_gain,
	.set_exposure = imx219_set_exposure,
	.set_frame_rate = imx219_set_frame_rate,
	.set_group_hold = imx219_set_group_hold,
};
Also, does ID=0 refer to mode0 in the DTS modeX list, or is it related to the array index of the static const struct camera_common_frmfmt lt6911uxc_frmfmt?”
static const struct camera_common_frmfmt lt6911uxc_frmfmt[] = {
	/* {size}, framerates, num_framerates, hdr_en, mode */
	{{1920, 1080}, lt6911uxc_60fps, 1, 0, 0},
	{{3840, 2160}, lt6911uxc_60fps, 1, 0, 1},
	{{1920, 1080}, lt6911uxc_60fps, 1, 0, 2},
};
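For what it's worth, adding the control to the list would look roughly like this (a driver fragment, assuming the standard control IDs from tegra-v4l2-camera.h; a sketch, not a complete or verified driver change):

```c
/* sketch: ctrl_cid_list with the sensor-mode control included */
static const u32 ctrl_cid_list[] = {
	TEGRA_CAMERA_CID_GAIN,
	TEGRA_CAMERA_CID_EXPOSURE,
	TEGRA_CAMERA_CID_FRAME_RATE,
	TEGRA_CAMERA_CID_GROUP_HOLD,
	TEGRA_CAMERA_CID_SENSOR_MODE_ID, /* enables explicit mode selection */
};
```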
For 4K30, the MIPI clock output from the LT6911 front-end chip to NVIDIA is calculated as MipiClock = 4 × Byteclk, where Byteclk = Pixclk × PixelBytes / lanes, and PixelBytes is the number of bytes per pixel: for example, RGB888/YUV444 has PixelBytes = 3, and RGB565/YUV422 has PixelBytes = 2. For a 4K30 HDMI input with single-port 4-lane CSI YUV422 output, Pixclk = 297 MHz and Byteclk = (297 × 2 / 4) MHz + 5 MHz. The Byteclk is generally set 5–10 MHz above the theoretical value for a landscape screen and 10–20 MHz above for a portrait screen. Could this added margin affect the CSI data reception on the NVIDIA side?
I am currently configured to receive sensor RGB24 data with 4 lanes and have tested the following resolutions:
3840x1080@30 received correctly
3840x1200@30 received abnormally
3840x1184@30 received abnormally
1920x2160@30 received correctly
1920x2160@60 received correctly
3840x2160@30 received abnormally
I found that when W = 3840 and H > 1080, reception seems to time out. Is this related to the sensor's MIPI output or to the NVIDIA CSI configuration?
Here is the DTS configuration. The different resolutions all use mode1 with only active_w/active_h changed; everything else remains the same.
Would the pixel clock difference affect CSI reception? I always restart the program between tests. In addition, 3840x540@30 is also received correctly.