MIPI driver: frame start syncpt timeout

hello AlbinRaj0,

may I know which commands you’re using to access the camera streams?

please try modifying the VI driver to increase the timeout by changing the chan->timeout value; it’s 2500 ms by default.
for example,

static int tegra_channel_capture_frame_single_thread(
                        struct tegra_channel *chan,
                        struct tegra_channel_buffer *buf)
{
        /* ... */
        for (index = 0; index < valid_ports; index++) {
                err = nvhost_syncpt_wait_timeout_ext(chan->vi->ndev,
                        chan->syncpt[index][0], thresh[index],
                        chan->timeout, NULL, &ts);

it seems there are abnormal timestamp values in post #27; they should not be negative.
could you please check the tracing logs again?

Hi Jerry,
Thanks for your support. There is no change in the trace log; it still shows the same abnormal timestamp values as post #27.
I will modify the VI driver.
For custom sensor access I am using the following commands and the yavta C code, which I have attached.
v4l2-ctl -d /dev/video0 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=t.raw

v4l2-ctl -d /dev/video0 --list-formats-ext

v4l2-ctl --set-fmt-video=width=256,height=1,pixelformat=RG8 --stream-mmap --set-ctrl=sensor_mode=0 --stream-count=1 -d /dev/video0

v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=1 --stream-to=test.raw --verbose

./yavta -c1 -f SRGGB8 -s 256x1 -F /dev/video0


Albin Raj R J
yavta.c (29.3 KB)

Hi Jerry,
After increasing chan->timeout, the abnormal timestamp values are gone, but it is not reading data and shows the same control error. I have attached my trace log.
I have a doubt: my custom sensor can send at most 4 rows of data per second, so I changed active_h=4, but it reverted to the default value of 32. Is that causing the issue? If so, which driver file do I need to change from 32 to 4? Thanks.

hello AlbinRaj0,

may I know which pixel formats are reported by the v4l2 standard controls,
i.e. $ v4l2-ctl -d /dev/video0 --list-formats-ext

there are default settings for the Tegra video input device; please refer to the kernel sources below for the common minimum and maximum width and height settings.
for example,

#define TEGRA_MIN_HEIGHT      32U
#define TEGRA_MAX_HEIGHT      32768U

you may also check the VI driver;
there’s a function that clamps the active dimensions to those definitions.
for example,

static void tegra_channel_fmt_align(struct tegra_channel *chan,
				const struct tegra_video_format *vfmt,
				u32 *width, u32 *height, u32 *bytesperline)
{
	/* ... */
	*height = clamp(*height, TEGRA_MIN_HEIGHT, TEGRA_MAX_HEIGHT);

Hi Jerry,
Thanks for your response. Here are the pixel formats reported by the v4l2 standard controls:
v4l2-ctl -d /dev/video0 --list-formats-ext
Index       : 0
Type        : Video Capture
Pixel Format: ‘RGGB’
Name        : 8-bit Bayer RGRG/GBGB
	Size: Discrete 256x1
		Interval: Discrete 0.250s (4.000 fps)
Jerry, I am getting two types of register errors along with the frame syncpt error. When I run my custom sensor with a continuous clock, the register reads a multi-bit SOT error (TEGRA_CSI_CILX_STATUS: 0x00020020) with a frame syncpt timeout; when I run it with a discontinuous clock, the register reads a control clock error (TEGRA_CSI_CILX_STATUS: 0x00000001) with a frame syncpt timeout.
Jerry, could you give me any idea where the issue might be?

Thank you so much for your support.

hello AlbinRaj0,

I would also like to emphasize that we don’t have such a camera sensor that outputs a single-row image for verification.

in theory, you should assign the capture settings according to the image formats reported by the v4l2 standard controls. however, as you can see from post #36, there’s a limitation in the common settings.

please try updating the VI driver to remove the minimum height definition for testing.

Hi Jerry,
Thanks for your response. There is no change in the error after reducing the minimum height to 1 in the VI driver.
Jerry, is it possible to read that error packet and print it in dmesg before it gets discarded? Is there any way to print that error packet?

Thanks for support.

I have updated the LP clock (cil_clk_mhz = 50) in the csi2_fops.c driver file to 50 MHz as per our sensor spec. After that, the SOT multi-bit error on data_lane 0 is cleared, and now the error is generated only on data_lane 1. I have attached the driver and a dmesg screenshot.
My doubt is: while changing cil_clk_mhz = 50, do we need to change csi_clk_mhz to 50 MHz as well?
If so, how do we calculate csi_clk_mhz?


hello AlbinRaj0,

it’s defined by TEGRA_CLOCK_CSI_PORT_MAX in vi2_registers.h,
for example,
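a fragment along the lines of the one below (the value shown here is illustrative, not authoritative; check vi2_registers.h in your own L4T source tree for the value your release ships):

```c
/* vi2_registers.h -- value is in MHz; illustrative, check your L4T tree */
#define TEGRA_CLOCK_CSI_PORT_MAX 102
```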

Hello Jerry,
Thanks for your response. I have changed TEGRA_CLOCK_CSI_PORT_MAX to 50 MHz, but there is no change in the error (SOT multi-bit error on data_lane 1). Is there any other way to debug it? Thanks.

Hi Jerry,
After changing the custom sensor data rate to 1.3 Gbps and setting chan->timeout = msecs_to_jiffies(1000) in the driver, I am able to read my first packet. Thanks for your support.
Now the issue is that I can only read 1 fps: my sensor sends 4 packets per second, but I can read only the first packet. I also have a doubt: why does my sensor need chan->timeout = msecs_to_jiffies(1000) while chan->timeout = msecs_to_jiffies(200) is enough for the imx219?
Could you please give me any idea how to debug it? Thanks a lot.
I have attached my dmesg.

hello AlbinRaj0,

glad to know you’ve made some progress.

there are frame_start and mw_ack_done hardware sync-points for the frame-start and frame-end sensor signaling.
the hardware sync-points use a different index for each; for example, chan->syncpt[index][0] for frame_start and chan->syncpt[index][1] for mw_ack_done.

it will only wait for frame_start if you’re using the single-thread approach for frame captures,
for example,

static int tegra_channel_capture_frame(struct tegra_channel *chan,
                                        struct tegra_channel_buffer *buf)
{
        if (chan->low_latency)
                ret = tegra_channel_capture_frame_multi_thread(chan, buf);
        else
                ret = tegra_channel_capture_frame_single_thread(chan, buf);

that MW_ACK_DONE sync-point timeout error should be caused by the wait timing out on the last frame when terminating the capture process.
you might try increasing the timeout limit for verification.

Hi Jerry.
Thanks for the information. I have tried the multi-thread approach as well, but the result is the same: it reads only 1 fps. Is there any other solution for reading 4 fps? Also, why is it reading only one frame, and do I need to enable any register for interlaced support? Thanks.

hello AlbinRaj0,

all the reference camera drivers we currently have are based on continuous mode.
the camera driver development is the same as for a continuous-mode camera, but the frame input is different.
is it possible to configure it as continuous mode for verification? may I know what’s the use-case of interlaced mode?

Hello Jerry,
Thanks for your response. I will try to configure my sensor in continuous mode. When I mentioned interlaced mode, I was referring to this mode. My sensor sends 4 packets in one second, but I found that the driver takes one second to process the first packet itself; that’s why it misses all the other packets and always reads only the first packet.
Could you give me any idea why it takes one second (chan->timeout = 250) to read a packet? Thanks for your time.

hello AlbinRaj0,

may I know the contents of these 4 packets? thanks

Hi Jerry,
It contains just hex characters. For example, packet 1 contains 0xFF (256 bytes), packet 2 contains 0x00 (256 bytes), packet 3 contains 0xCC (256 bytes), and packet 4 contains 0xDD (256 bytes). When my sensor sends these packets to the Jetson Nano, I can read only the 0xFF packet (256 bytes). Could you please suggest any idea to debug it?
Thanks for your support.

hello AlbinRaj0,

since we don’t have such a sensor that outputs single-row data for validation, I can only imagine there’s a signal indicating the end of the packet, which makes VI recognize it as frame-end.

Hi Jerry,
I am happy to say that we found the issue: it was in the virtual channel ID. Thanks a lot for your support. I praise God for your support.
