I don’t see SHORT_FRAME messages.
anyways…
could you please try disabling sensor controls, such as gain/exposure/frame-length settings.
since you’ve mentioned it works occasionally, I suspect different register settings being written to the sensor cause this failure.
My video source is an FPGA,
so there are no gain/exposure/frame-length settings …
I don’t need to control the FPGA; the data is output automatically when the power is turned on, and it displays normally on r32.6.1 (although there is a SHORT_FRAME)… but it rarely displays normally on r35.3.1. May I ask what the difference between the two versions is?
the major difference is the kernel version. as you can see in Camera Driver Porting, it’s now using kernel-5.10
you may check the VI-5 driver and configure a higher timeout value than #define CAPTURE_TIMEOUT_MS 2500
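For example, a sketch of the change (the exact file holding the define may vary across releases, and 5000 here is just an illustrative larger value, not a recommended setting):

```c
/* VI-5 channel driver -- raise the per-frame capture timeout */
#define CAPTURE_TIMEOUT_MS	5000	/* previously 2500 */
```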
and… for your video source, is it possible to trigger a reset when you execute the v4l2 pipeline?
it seems something is wrong with your 2nd camera’s device tree settings.
please update tegra_sinterface accordingly.
for example,
gen3_b@2e { // 0x2e: use any available address
mode0 {
mclk_khz = "37125";
num_lanes = "4";
tegra_sinterface = "serial_a"; <== it should be serial_b
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
ligen3_gen3_out1: endpoint {
port-index = <2>;
this looks incorrect: csi_pixel_bit_depth = "24" together with dynamic_pixel_bit_depth = "16".
csi_pixel_bit_depth is the pixel bit depth [bit] on CSI, and since it’s an SDR sensor, you should have dynamic_pixel_bit_depth = csi_pixel_bit_depth.
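For example, a sketch of the matched properties, assuming the FPGA actually sends 16-bit pixels (use "24" for both instead if the CSI stream really carries 24-bit data):

```dts
mode0 {
	...
	csi_pixel_bit_depth = "16";
	dynamic_pixel_bit_depth = "16"; /* SDR: must equal csi_pixel_bit_depth */
};
```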
this should be minor: please update the position property to rear and front for the 2 cameras in your system,
for example,
modules {
	module0 {
		status = "okay";
		badge = "gen3_top_i2c0_b";
		position = "front";
	module1 {
		status = "okay";
		badge = "gen3_top_i2c1_b";
		position = "centerleft"; <== revise this to "rear"
(4) according to the VI tracing logs… the driver side kept waiting for sensor frames. there’s no sensor-related signaling, such as SOF, EOF, etc.
since you’ve said image data will always be output, you might try issuing a software reset (on the SerDes) to align with the software. this reset event should be triggered before s_stream(), where the capture engine starts waiting for the 1st start-of-frame of the MIPI signaling.
this reset is done by sender side. i.e. your FPGA device.
if you can’t control the video source (FPGA) to do a software reset, how about toggling its power supply off/on and then capturing the frames?
it should be minor,
please try sending an EoS (with the -e option) when shutting down the pipeline.
for example, gst-launch-1.0 -ev v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)BGRA, width=(int)400, height=(int)400' ! videoconvert ! xvimagesink
please have a try for sending an EoS (with `-e` options) when shutdown the pipeline. ?
It fails the first time after starting up. Only after restarting the test many times is there a low probability that the image can be displayed successfully. And if it does start successfully, closing the program and starting it a second time fails again.
let me double confirm that…
(1) you’re not sending a reset to your FPGA, but keep retrying this gst pipeline, and with low probability it fetches the stream.
(2) may I know what timeout value you’ve configured now?
(3) have you tried the commands below to boost the clocks, to rule out system-level clock configuration?
for example,
sudo su
echo 1 > /sys/kernel/debug/bpmp/debug/clk/vi/mrq_rate_locked
echo 1 > /sys/kernel/debug/bpmp/debug/clk/isp/mrq_rate_locked
echo 1 > /sys/kernel/debug/bpmp/debug/clk/nvcsi/mrq_rate_locked
echo 1 > /sys/kernel/debug/bpmp/debug/clk/emc/mrq_rate_locked
cat /sys/kernel/debug/bpmp/debug/clk/vi/max_rate | tee /sys/kernel/debug/bpmp/debug/clk/vi/rate
cat /sys/kernel/debug/bpmp/debug/clk/isp/max_rate | tee /sys/kernel/debug/bpmp/debug/clk/isp/rate
cat /sys/kernel/debug/bpmp/debug/clk/nvcsi/max_rate | tee /sys/kernel/debug/bpmp/debug/clk/nvcsi/rate
cat /sys/kernel/debug/bpmp/debug/clk/emc/max_rate | tee /sys/kernel/debug/bpmp/debug/clk/emc/rate
let’s try putting some delay on the VI driver side before the FPGA powers on and starts sending frames.
the sensor device should be stream-on after tegra_channel_set_stream() is complete,
for example, $public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/vi/channel.c
int tegra_channel_set_stream(struct tegra_channel *chan, bool on)
{
...
	// add schedule_delayed_work() here to wait for ~400 ms
	if (ret == 0)
		atomic_set(&chan->is_streaming, on);
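A minimal sketch of that delay, using msleep() for simplicity in place of schedule_delayed_work() (assumption: a blocking 400 ms wait is acceptable in this path; tune the value to your FPGA’s start-up time):

```c
/* channel.c -- sketch only: give the FPGA time to start driving the
 * MIPI lanes before the channel is marked as streaming */
#include <linux/delay.h>	/* msleep() */

int tegra_channel_set_stream(struct tegra_channel *chan, bool on)
{
	...
	if (on)
		msleep(400);	/* assumed FPGA settle time, tune as needed */
	if (ret == 0)
		atomic_set(&chan->is_streaming, on);
	...
}
```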
[ 176.284452] [TEGRA-GEN3]gen3_start_streaming:
[ 176.284456] gen3 0-001e: gen3_start_streaming: Mode ID : 0
[ 176.284460] [TEGRA-CHANNEL]tegra_channel_set_stream:dd1
[ 176.284466] [TEGRA-CHANNEL]tegra_channel_set_stream:dd2
[ 176.699648] defense_work_handler function.
[ 181.979394] tegra-camrtc-capture-vi tegra-capture-vi: uncorr_err: request timed out after 5500 ms
[ 181.979606] tegra-camrtc-capture-vi tegra-capture-vi: err_rec: attempting to reset the capture channel
[ 181.980296] (NULL device *): vi_capture_control_message: NULL VI channel received
[ 181.980445] t194-nvcsi 13e10000.host1x:nvcsi@15a00000: csi5_stream_close: Error in closing stream_id=0, csi_port=0
[ 181.980637] (NULL device *): vi_capture_control_message: NULL VI channel received
[ 181.980780] t194-nvcsi 13e10000.host1x:nvcsi@15a00000: csi5_stream_open: VI channel not found for stream- 0 vc- 0
[ 181.981292] tegra-camrtc-capture-vi tegra-capture-vi: err_rec: successfully reset the capture
One thing I found: the success rate of the first test after each power-off is higher…
This seems related to what you said earlier:
this reset is done by sender side. i.e. your FPGA device.
if you can’t control video sources (FPGA) for software reset, how about toggle the power supply off/on then capture the frames.