MIPI driver: frame start syncpt timeout!0

Hi Jerry,
Thanks for your reply. I have checked that configuration; it is running in max performance mode. I have added my power config below.
$ sudo /usr/sbin/nvpmodel -q

NVPM WARN: fan mode is not set!

NV Power Mode: MAXN
2] I have calculated pix_clk_hz using the formula pix_clk_hz = 225 Mbps * 2 / 8.
3] I have executed the following command and it hangs.
$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=256,height=1,pixelformat=RG8 --stream-mmap --stream-count=10 --stream-to=test.raw --verbose
Format Video Capture:
Width/Height : 256/32
Pixel Format : ‘RGGB’
Field : None
Bytes per Line : 256
Size Image : 8192
Colorspace : sRGB
Transfer Function : Default (maps to sRGB)
YCbCr/HSV Encoding: Default (maps to ITU-R 601)
Quantization : Default (maps to Full Range)
Flags :

Thank you so much

Hi Jerry,
Now I am getting a SOT multi-bit error; could you please help me overcome it? I have attached my dmesg below.
Thank you.


hello albinraj.c0519,

since we don’t have a sensor module that outputs a single-row image,
could you please revise active_h to 32 or some larger value for checking.

Hi Jerry,
Thanks for your response. I have changed active_h to 32/64/100, and it still reports the same error. Can I handle the syncpt error inside any driver file?

hello albinraj.c0519,

may I know whether there are any differences in the error message while setting different active_h values?
there’ll be device tree configuration modifications once you’ve confirmed there’s signaling on the MIPI channels.

Hi Jerry,
There is no difference in the error message; it reports the same error!
I have attached my dmesg and trace screenshot.

hello AlbinRaj0,

there’s an error reported by the TEGRA_CSI_CILx_STATUS register; it shows 0x00020021.
you may also refer to the Tegra X1 SoC Technical Reference Manual, [Chapter-29.16 MIPI-CSI registers].
for example,
CSI_CSI_CILA_STATUS_0=0x20021 indicates there’s a control error, and also transmission multi-bit errors.

here’s a similar discussion thread, Topic 51165, which you may also refer to.

Hi Jerry,
I have cleared the control error, but I can’t clear the multi-bit error. As you suggested, I went through Topic 51165; the register settings are fine. Here I have attached the section for the discontinuous clock and settle time.

/* CIL PHY registers setup */
cil_write(port, TEGRA_CSI_CIL_PAD_CONFIG0, 0x0);
cil_write(port, TEGRA_CSI_CIL_PHY_CONTROL,
	csi_settletime << CLK_SETTLE_SHIFT |
	!discontinuous_clk << BYPASS_LP_SEQ_SHIFT |
	cil_settletime << THS_SETTLE_SHIFT);

Could you give me any idea to debug the multi-bit error?

hello AlbinRaj0,

the discontinuous clock and the settle time are configured through device tree settings.
for example,

                i2c@546c0000 {
                        imx219_single_cam0: rbpcv2_imx219_a@10 {
                                compatible = "nvidia,imx219";
                                mode0 { /* IMX219_MODE_3264x2464_21FPS */
                                        discontinuous_clk = "yes";
                                        cil_settletime = "0";

you should review your sensor specification to have the correct clock control configurations.
please also note that,
if you set cil_settletime=0, the driver attempts to auto-calibrate according to the mclk_multiplier parameter.
in addition,
there’s an error in the [Sensor Driver Programming Guide]: the settle time formula is wrong, so please check the TRM for the correct Ths-settle calculation formula.
it should be…

85ns + 6 * UI < (Ths-settle-programmed + 5) * lp clock periods < 145ns + 10 * UI

Hi Jerry,
Thanks for your response. I am working on it; I will update.

Hi Jerry,
After changing the settle time, the multi-bit SOT error is gone, but now it reports a control error. I have attached a dmesg screenshot.
Could you please give me some idea?

Note: in the screenshot, the value 102 is not the settle time; it is CSICIL clock cycles. The settle time is 18.

hello AlbinRaj0,

may I know what commands you used to access the camera stream.

please try modifying the VI driver to increase the timeout by hacking the chan->timeout value; it’s 2500 ms by default.
for example,

static int tegra_channel_capture_frame_single_thread(
			struct tegra_channel *chan,
			struct tegra_channel_buffer *buf)
{
	...
	for (index = 0; index < valid_ports; index++) {
		err = nvhost_syncpt_wait_timeout_ext(chan->vi->ndev,
			chan->syncpt[index][0], thresh[index],
			chan->timeout, NULL, &ts);

it seems there are abnormal timestamp values in post #27; they should not be negative.
could you please check the tracing logs again?

Hi Jerry,
Thanks for your support. There is no change in the trace log; it still shows the same abnormal timestamp values as post #27.
I will modify the VI driver.
For custom sensor access I am using the following commands and the yavta C code, which I have attached.
v4l2-ctl -d /dev/video0 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=t.raw

v4l2-ctl -d /dev/video0 --list-formats-ext

v4l2-ctl --set-fmt-video=width=256,height=1,pixelformat=RG8 --stream-mmap --set-ctrl=sensor_mode=0 --stream-count=1 -d /dev/video0

v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=1 --stream-to=test.raw --verbose

./yavta -c1 -f SRGGB8 -s 256x1 -F /dev/video0


Albin Raj R J
yavta.c (29.3 KB)

Hi Jerry,
After increasing chan->timeout, the abnormal timestamp values are gone, but it is not reading data and shows the same control error. I have attached my trace log.
I have a doubt: my custom sensor can send a maximum of 4 rows of data per second,
so I changed active_h=4, but it changed back to the default value of 32. Is this causing the issue? If so, which driver file do I need to change from 32 to 4? Thanks.

hello AlbinRaj0,

may I know what pixel formats are reported by the v4l2 standard controls.
i.e. $ v4l2-ctl -d /dev/video0 --list-formats-ext

there are default settings for the Tegra video input device; please refer to the kernel sources below for the minimum and maximum width and height common settings.
for example,

#define TEGRA_MIN_HEIGHT      32U
#define TEGRA_MAX_HEIGHT      32768U

you may also check the VI driver;
there’s a function that clamps the active dimensions to those definitions.
for example,

static void tegra_channel_fmt_align(struct tegra_channel *chan,
				const struct tegra_video_format *vfmt,
				u32 *width, u32 *height, u32 *bytesperline)

	*height = clamp(*height, TEGRA_MIN_HEIGHT, TEGRA_MAX_HEIGHT);

Hi Jerry,
Thanks for your response. The pixel formats reported by the v4l2 standard controls:
v4l2-ctl -d /dev/video0 --list-formats-ext
Index : 0
Type : Video Capture
Pixel Format: ‘RGGB’
Name : 8-bit Bayer RGRG/GBGB
Size: Discrete 256x1
Interval: Discrete 0.250s (4.000 fps)
Jerry, I am getting two types of register errors along with the frame syncpt error. When I run my custom sensor with a continuous clock, the register reads a multi-bit SOT error (TEGRA_CSI_CILX_STATUS: 0x00020020) with a frame syncpt timeout; when I run it with a discontinuous clock, the register reads a control clock error (TEGRA_CSI_CILX_STATUS: 0x00000001) with a frame syncpt timeout.
Jerry, could you give me any idea where the issue might be?

Thank you so much for your support.

hello AlbinRaj0,

I would also like to emphasize that we don’t have such a camera sensor that outputs a single-row image for verification.

in theory, you should assign the capture settings according to the image formats reported by the v4l2 standard controls. however, as you can see from post #36, there’s a limitation in the common settings.

please try updating the VI driver to remove the minimum height definition for testing.

Hi Jerry,
Thanks for your response. There is no change in the error after reducing the minimum height to 1 in the VI driver.
Jerry, is it possible to read that error packet and print it in dmesg before it gets discarded? Is there any way to print that error packet?

Thanks for support.

I have updated the LP clock (cil_clk_mhz = 50) in the csi2_fops.c driver file to 50 MHz as per our sensor spec. After that, the SOT multi-bit error on data_lane 0 is cleared, and now it generates an error only on data_lane 1. I have attached the driver and a dmesg screenshot.
My doubt is: while changing cil_clk_mhz = 50, do we need to change csi_clk_mhz to 50 MHz or not?
If so, how do we calculate csi_clk_mhz?


hello AlbinRaj0,

it’s defined by TEGRA_CLOCK_CSI_PORT_MAX in vi2_registers.h,
for example,