I am writing a driver for a MIPI camera, and I can capture one frame via v4l2-ctl. The next few frames are black (all zeros), then I get a "normal" frame followed by 3 "black" frames, and so on.
So basically: 1 normal frame, then 3 black frames, repeating.
I have no idea where to look.
Can you point me in the right direction or share any ideas?
The error "MW_ACK_DONE syncpoint time out!0" appears only for the last frame (no matter how long the stream runs; I tested for a few hours and it always appeared only at the end).
Example output of v4l2-ctl:
Good frames always have "Index : 0".
Frames with "Index : 1, 2, 3" are black.
VIDIOC_QUERYCAP: ok
VIDIOC_S_EXT_CTRLS: ok
VIDIOC_G_FMT: ok
VIDIOC_S_FMT: ok
Format Video Capture:
Width/Height : 3840/2160
Pixel Format : 'RG10'
Field : None
Bytes per Line : 7680
Size Image : 16588800
Colorspace : sRGB
Transfer Function : Default (maps to sRGB)
YCbCr/HSV Encoding: Default (maps to ITU-R 601)
Quantization : Default (maps to Full Range)
Flags :
VIDIOC_REQBUFS: ok
VIDIOC_QUERYBUF: ok
VIDIOC_QBUF: ok
VIDIOC_QUERYBUF: ok
VIDIOC_QBUF: ok
VIDIOC_QUERYBUF: ok
VIDIOC_QBUF: ok
VIDIOC_QUERYBUF: ok
VIDIOC_QBUF: ok
VIDIOC_STREAMON: ok
Index : 0
Type : Video Capture
Flags : mapped
Field : None
Sequence : 0
Length : 16588800
Bytesused: 16588800
Timestamp: 761.362582s (Monotonic, End-of-Frame)
Index : 1
Type : Video Capture
Flags : mapped
Field : None
Sequence : 1
Length : 16588800
Bytesused: 16588800
Timestamp: 761.382855s (Monotonic, End-of-Frame)
Index : 2
Type : Video Capture
Flags : mapped
Field : None
Sequence : 2
Length : 16588800
Bytesused: 16588800
Timestamp: 761.403138s (Monotonic, End-of-Frame)
Index : 3
Type : Video Capture
Flags : mapped
Field : None
Sequence : 3
Length : 16588800
Bytesused: 16588800
Timestamp: 761.423441s (Monotonic, End-of-Frame)
Index : 0
Type : Video Capture
Flags : mapped
Field : None
Sequence : 4
Length : 16588800
Bytesused: 16588800
Timestamp: 761.443719s (Monotonic, End-of-Frame)
Index : 1
Type : Video Capture
Flags : mapped
Field : None
Sequence : 5
Length : 16588800
Bytesused: 16588800
Timestamp: 761.484292s (Monotonic, End-of-Frame)
Index : 2
Type : Video Capture
Flags : mapped
Field : None
Sequence : 6
Length : 16588800
Bytesused: 16588800
Timestamp: 761.524857s (Monotonic, End-of-Frame)
Index : 3
Type : Video Capture
Flags : mapped
Field : None
Sequence : 7
Length : 16588800
Bytesused: 16588800
Timestamp: 761.565422s (Monotonic, End-of-Frame)
How do you examine each capture buffer? Did you dump every frame to review the content?
Please also try the pipeline below. What frame rate does it report? $ v4l2-ctl -d /dev/video0 --set-fmt-video=width=3840,height=2160,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100
How do you examine each capture buffer? Did you dump every frame to review the content?
Yes, I examined the raw dump. The file contains all frames, each 16588800 bytes long.
Please also try the pipeline below. What frame rate does it report? $ v4l2-ctl -d /dev/video0 --set-fmt-video=width=3840,height=2160,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100
It depends on the H/V-MAX settings; I tried many values and got results from 5 FPS to 50 FPS.
In all cases the pattern is the same: 1 good frame, then 3 black frames, repeating.
I'd like to double-check this comment.
Don't you have a single set of H/V-MAX settings for this 3840x2160 sensor mode?
The v4l2-ctl sample command streams 100 frames, and the reported frame rate should be consistent.
Don't you have a single set of H/V-MAX settings for this 3840x2160 sensor mode?
The v4l2-ctl sample command streams 100 frames, and the reported frame rate should be consistent.
I have a single set of settings for this mode.
Right now I have dummy "set_gain", "set_exposure", and "set_frame_rate" implementations (all return 0), so to change the frame rate I manually change the H/V-MAX settings and recompile the driver. I tried different values and got different FPS readings from v4l2-ctl.
To confirm: the frame rate is always constant.
It seems like vblank is too long, so you're not catching frames.
May I ask who the sensor vendor is? Since there's no failure on the VI driver side, is it possible to ask the camera vendor to review your settings?
It seems like vblank is too long, so you're not catching frames.
If I understand correctly: if vblank is too long, the frame rate will also be low. Is that correct?
When I change VMAX, the frame rate changes, but the ratio (1 good frame to 3 black frames) stays the same. Only when I set V-MAX for 1 FPS are all frames good.
I also tried switching to 12-bit output; the result is the same (1 good frame, 3 black, and so on).
May I ask who the sensor vendor is? Since there's no failure on the VI driver side, is it possible to ask the camera vendor to review your settings?
The sensor comes directly from Sony.
Sony is not very helpful :) They provide the docs and that's all.
I double-checked the registers; everything is correct.
Let's look into the VI driver; please add some debug prints to analyze the issue.
For example, $public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/vi/vi2_fops.c
There's a sync point that waits for the sensor signal; it waits until there is a start-of-frame.
Also: $public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/vi/channel.c
This is the internal buffer pool; it sends the capture buffers to user space.
void free_ring_buffers(struct tegra_channel *chan, int frames)
{
        ...
        vb2_buffer_done(&vbuf->vb2_buf,
                        chan->buffer_state[chan->free_index++]);
        ...
}
Please add debug prints in these two places; you may also include timestamps to check the fps.
Let's assume it's running at 4 fps.
According to your bug description, you should always get one valid frame within each second.
Since there are no kernel failures, please examine how many frames (SOF signals) are received by the VI driver.
Thanks
Are those the logs with the frame rate configured at 6 fps?
If so, it means the sensor is sending black frames.
BTW, since you said there are three black frames following each valid frame:
chan->sequence is the frame index, so as an alternative you could drop those black frames for testing.
I have an FPGA that aggregates the images of multiple IMX258 cameras into a single image. The FPGA is connected to a Jetson Nano via the CSI A/B interface. I'm trying to capture the images using this command:
I see the same pattern of good/bad frames as prapor:
Frames with indices 0, 4, 8, … are fine.
Frames with the indices in between (1, 2, 3, 5, 6, 7, …) are just black.
The output of the command is:
VIDIOC_QUERYCAP: ok
VIDIOC_G_FMT: ok
VIDIOC_S_FMT: ok
Format Video Capture:
Width/Height : 4096/6400
Pixel Format : 'Y16 ' (16-bit Greyscale)
Field : None
Bytes per Line : 8192
Size Image : 52428800
Colorspace : sRGB
Transfer Function : Default (maps to sRGB)
YCbCr/HSV Encoding: Default (maps to ITU-R 601)
Quantization : Default (maps to Full Range)
Flags :
VIDIOC_REQBUFS returned 0 (Success)
VIDIOC_QUERYBUF returned 0 (Success)
VIDIOC_QUERYBUF returned 0 (Success)
VIDIOC_QUERYBUF returned 0 (Success)
VIDIOC_QUERYBUF returned 0 (Success)
VIDIOC_QBUF returned 0 (Success)
VIDIOC_QBUF returned 0 (Success)
VIDIOC_QBUF returned 0 (Success)
VIDIOC_QBUF returned 0 (Success)
VIDIOC_STREAMON returned 0 (Success)
cap dqbuf: 0 seq: 0 bytesused: 52428800 ts: 37.054025 (ts-monotonic, ts-src-eof)
cap dqbuf: 1 seq: 1 bytesused: 52428800 ts: 37.125391 delta: 71.366 ms (ts-monotonic, ts-src-eof)
cap dqbuf: 2 seq: 2 bytesused: 52428800 ts: 37.196856 delta: 71.465 ms (ts-monotonic, ts-src-eof)
cap dqbuf: 3 seq: 3 bytesused: 52428800 ts: 37.268295 delta: 71.439 ms (ts-monotonic, ts-src-eof)
cap dqbuf: 0 seq: 4 bytesused: 52428800 ts: 37.339759 delta: 71.464 ms fps: 14.00 (ts-monotonic, ts-src-eof)
The corresponding mode entry in the device tree is:
The theoretical maximum is 22fps (limited not by the sensor but the MIPI bandwidth).
At the moment, I'm testing at 14 fps since the FPGA cannot go faster. I also tried 5 and 10 fps with the same pattern of good and bad frames. I have now set max_framerate = "22000000"; but it did not change the results.
I figured out that the issue occurs if the FPGA sends fewer image lines per frame than the driver expects.
If we are sending an image with e.g. 4096x3200 resolution, we have the pattern of good/bad frames. If we send the full 4096x6400 resolution, all frames are fine.
I checked the Tegra_X1_TRM and saw this in Chapter 29.8.3 - CASE 1:
If this frame was to be sent to memory, the VI unit will not detect that it is not the right frame and will not raise the sync point.
When it receives the next frame from the CSI unit, it will overwrite the memory buffer (at the buffer location pointed to by the previous frame parameter).
So in the case of too few lines, the VI will always write to the same memory buffer instead of switching to the next buffer from the ringbuffer. Since there are 4 framebuffers in the ringbuffer, this would explain the good/bad frame pattern I see.
Is there a way to configure the behavior of the VI unit and make it write the incomplete frames to the correct memory buffers?
Or, as an alternative, is there a way to re-configure the expected frame resolution "on-the-fly", without stopping and re-starting the streaming?
In my case, the width the driver expected was 8 pixels larger than what the sensor sent: the sensor sent 3856 pixels per line, but the driver expected 3864, and that is why the pattern (1 good frame, 3 bad frames) repeats.
Since I need to change the image resolution while streaming, I cannot configure it statically in the device tree.
In the TRM, I saw that the CSI unit is capable of adding black pixels to pad under-sized frames to match the expected frame size.
The respective fields of the register CSI_PIXEL_STREAM_A_CONTROL0_0 are CSI_PPA_PAD_FRAME and CSI_PPA_PAD_SHORT_LINE. I adapted the CSI driver to enable the padding.