Hi,
I’m trying to update our system from JetPack 3.2.x to JetPack 4.5.1, but I’m having issues ingesting video with our ported driver.
We ingest video via an FPGA, which we have to tell (from userspace) to start sending video over CSI. The FPGA waits for the start of an external (SDI) video frame before it actually starts sending YUV422 over CSI. We had to be careful to enable the FPGA’s video output very soon after the camera driver was enabled, to avoid timeouts; we found that if we fail to do this quickly, the CSI/VI never seems to find the start of frame. This has worked well for us on 3.2.x.
I’m struggling to get this to work on 4.5.1. Dmesg says:
[ 1045.562417] v4fpga 2-0010: ext_camera_power_on: power on
[ 1045.575874] v4fpga 2-0010: ext_camera_power_on: powered on
[ 1045.789529] tegra-vi4 15700000.vi: PXL_SOF syncpt timeout! err = -11
[ 1045.795895] tegra-vi4 15700000.vi: tegra_channel_error_recovery: attempting to reset the capture channel
[ 1045.805593] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) ERROR_STATUS2VI_VC0 = 0x00000004
[ 1045.814318] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) INTR_STATUS 0x00000004
[ 1045.822174] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) ERR_INTR_STATUS 0x00000004
[ 1046.037601] tegra-vi4 15700000.vi: PXL_SOF syncpt timeout! err = -11
[ 1046.043966] tegra-vi4 15700000.vi: tegra_channel_error_recovery: attempting to reset the capture channel
[ 1046.053769] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) ERROR_STATUS2VI_VC0 = 0x00000004
[ 1046.062483] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) INTR_STATUS 0x00000004
[ 1046.070394] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) ERR_INTR_STATUS 0x00000004
[ 1046.103195] tegra-vi4 15700000.vi: Status: 2 channel:00 frame:0002
[ 1046.109476] tegra-vi4 15700000.vi: timestamp sof 1054280353632 eof 1054297017760 data 0x000000a0
[ 1046.118715] tegra-vi4 15700000.vi: capture_id 101 stream 0 vchan 0
[ 1046.289532] tegra-vi4 15700000.vi: ATOMP_FE syncpt timeout!
[ 1046.295113] tegra-vi4 15700000.vi: tegra_channel_error_recovery: attempting to reset the capture channel
[ 1046.305611] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) ERROR_STATUS2VI_VC0 = 0x00000004
[ 1046.314340] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) INTR_STATUS 0x00000004
[ 1046.322184] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) ERR_INTR_STATUS 0x00000004
[ 1046.330938] nvcsi 150c0000.nvcsi: csi4_stream_check_status (0) ERROR_STATUS2VI_VC0 = 0x00000004
As you can see, there is a fairly quick timeout which we don’t see on 3.2.x. There is then a ~16.7 ms SOF-to-EOF interval, which makes sense for the 720p60 input. I don’t know what the ATOMP_FE syncpt timeout is trying to tell me, though. From this point it fails to get any further capture done: the retries are not visible to our userspace code, so the FPGA is never resynchronised.
I tried playing around with set_mode_delay_ms to change the timeout period, but it had very little effect. The camera/FPGA DT stanza looks like:
v4fpga@10 {
	reg = <0x10>;
	devnode = "video0";
	mclk = "extperiph1";
	compatible = "vision4ce,v4fpga";
	clock-names = "extperiph1", "pllp_grtba";
	reset-gpios = <0x27 0x0 0x0>;
	physical_h = "4.930";
	physical_w = "5.095";
	clocks = <0x10 0x59 0x10 0x10d>;
	sensor_model = "v4fpga";
	status = "okay";
	vif-supply = <&en_vdd_cam>;
	iovdd-reg = "vif";
	vana-supply = <&en_vdd_cam_hv_2v8>;
	avdd-reg = "vana";
	vdig-supply = <&en_vdd_cam_1v2>;
	dvdd-reg = "vdig";
	vvcm-supply = <&en_vdd_vcm_2v8>;
	vcmvdd-reg = "vvcm";
	use_sensor_mode_id = <0>;
	set_mode_delay_ms = "10000";
};
I set the timeout delay to 10 seconds here, but did try other, smaller values (e.g. 100 ms).
As the FPGA is fully powered by its own supplies, the driver actually ignores the regulators; I assume I can remove these later.
There are two obvious differences in our setup for 4.5.1.
1: The camera driver is now an external module. Previously we had to include it as a built-in and recompile the kernel, which we really don’t want to do this time around. This leads into the second difference…
2: As we have to ingest YUV422, and the NVIDIA DT parser only accepts Bayer sensors, we lie to the camera block and then immediately “correct” the pixel formats:
err = camera_common_initialize(common_data, "v4fpga");
if (err) {
	dev_err(&client->dev, "Failed to initialize v4fpga.\n");
	return err;
}

/* hard-code the pixel format, as UYVY is not supported by the NVIDIA DT parser */
sensor = &common_data->sensor_props;
for (i = 0; i < sensor->num_modes; ++i)
	sensor->sensor_modes[i].image_properties.pixel_format = V4L2_PIX_FMT_UYVY;
The mode for this 720p60 input looks like:
mode1 {
	inherent_gain = "1";
	pix_clk_hz = "74250000";
	max_gain_val = "16.0";
	min_hdr_ratio = "1";
	min_framerate = "24";
	cil_settletime = "0";
	max_exp_time = "683709";
	active_h = "720";
	active_w = "1280";
	mclk_khz = "37125";
	min_gain_val = "1.0";
	max_hdr_ratio = "64";
	max_framerate = "60";
	tegra_sinterface = "serial_a";
	phy_mode = "DPHY";
	line_length = "1980";
	mode_type = "yuv";
	pixel_phase = "uyvy";
	pixel_t = "bayer_xrggb10p";
	csi_pixel_bit_depth = "8";
	dynamic_pixel_bit_depth = "8";
	readout_orientation = "0";
	mclk_multiplier = "2";
	num_lanes = "4";
	discontinuous_clk = "no";
	min_exp_time = "13";
	embedded_metadata_height = "0";
};
Does this approach of overriding the pixel formats sound workable? Any ideas why the set_mode_delay_ms seems to have no effect?
Thanks,
Ratbert