We have a Jetson TX2, and we are planning to interface it with an FPGA that converts incoming LVDS video from a camera to MIPI CSI-2. We have verified that the FPGA output conforms to the MIPI specification.
Currently, due to a constraint, we want to increase THS-EXIT (the time the transmitter drives LP-11 following an HS burst), i.e. after each Frame Start, Line End, and Frame End we would like to increase THS-EXIT.
We would like to know: is there a time limit in the Jetson receiver for THS-EXIT, after which the Jetson MIPI receiver stops receiving or reports an error? Or can we use any value we want?
cil_settletime is used to configure THS-SETTLE, and if we set it to 0 it auto-calibrates. THS-SETTLE is the period during which the HS receiver shall ignore any Data Lane HS transitions. However, I would like to know about THS-EXIT, the time the transmitter drives LP-11 following an HS burst. After THS-EXIT, the lane stays in the LP-11 state until the next data is transmitted. I want to know whether I can drive LP-11 for an extended period after THS-EXIT, because in the MIPI CSI-2 documentation I don't see any limit on keeping the transmitter in the LP-11 state for an extended period of time.
For a better understanding of my use case, below is my application:
I have incoming video data with a clock frequency of x MHz (one clock cycle carries one pixel), and due to a constraint my CSI-2 transmitter IP cannot go below 2x MHz. Hence I have created a line buffer: I acquire a line, store it in the buffer, transmit it, then wait to acquire the next line and transmit it in turn. This is how I would like to implement it:
Acquire one Line and store it in the Line buffer (640 pixels in my case)
Send SOF
↓
Transmit one Line from the Line buffer
↓
Wait to acquire one Line (transmitter holds LP-11 until one line is acquired, i.e. LP-11 persists for an extended period, about 640*(1/x MHz))
↓
Transmit one Line from the Line buffer
↓
Wait to acquire one Line from input (transmitter holds LP-11 until one line is acquired, i.e. LP-11 persists for an extended period, about 640*(1/x MHz))
↓
Transmit one Line from the Line buffer
↓
The same cycle continues until all Lines are sent
↓
Send EOF
It may cause a timeout, but you can configure a larger timeout value in the driver to avoid this.
The timeout is configured as 200 ms by default, i.e. chan->timeout = msecs_to_jiffies(200);
Here's where the VI driver waits for start-of-frame:
for example, $public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/vi/vi4_fops.c
In $public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/vi/vi4_fops.c,
I see nvhost_syncpt_wait_timeout_ext in two places, Frame Start and Frame End. My question is: what happens if there is more delay between the lines?
The resolution of each frame is 640x480 and the pixel clock frequency is currently 10 MHz, so for each line I can expect my transmitter to be in the LP-11 state for 640*(1/10,000,000) = 64 microseconds, which is well below the 200 ms timeout discussed above, so we should have no problem, right?
May I have more details about "Transmit one Line from Line buffer"?
It's Start-of-Transmission (SoT) for sending frames. Did you mean you only send one line at a time?
We have an LVDS video-out interface whose pixel clock frequency is 10 MHz; due to a constraint, my CSI-2 IP does not allow the byte clock frequency to go below 20 MHz. So we are using a FIFO in the FPGA.
When we get VSYNC, we listen for HSYNC, and once we have HSYNC we store the pixel data in the FIFO; once all the pixel data of a line (640 pixels) is acquired, we raise a flag indicating that one line has been acquired.
On the transmitter side, after reset or boot-up, we send SoF and then wait for the line-acquired flag to go up (we cannot forward the incoming LVDS data directly because the transmitter clock is 20 MHz, so we acquire one line first and then send all of it; at this point the lane is in the LP-11 state). Once we have the line-acquired flag, the transmitter logic in the FPGA sends one line of data, i.e. 640 pixels, and then waits again for the next line. When all 480 lines have been sent, we send EoF and the cycle continues.
So coming to your question: yes, we send SoF, wait for one line to be acquired in the FIFO (approximately 640 pixels * 1/10 MHz = 64 microseconds), send one line, wait ~64 microseconds, send the second line, wait ~64 microseconds, ..., send the 480th line, then EoF.
While it is waiting for ~64 microseconds, the lane is in the LP-11 state, hence my question whether the Jetson receiver will time out when LP-11 is held for ~64 microseconds.
To be honest, I don't know.
I've never had experience with a sensor sending lines with a pause.
May I also know the final goal for using this frame?
For example, per the Camera Architecture Stack, may I know the pipeline for processing it?
This should be tried with a V4L2 application that uses direct kernel IOCTL calls; libargus may time out if the per-line pauses stretch the frame time. (Note: with a ~64 us pause per line, the pauses add 64 us * 480 lines ~= 30.72 ms per complete capture frame.)
So…
is it possible to give it a try and gather the kernel messages for reference?