We have a CSI-2 camera sensor connected to the video0 node on a Jetson Xavier NX, running L4T 32.5 (kernel 4.9.201).
At the application level we configure a capture of 10 frames (or any number of frames), and the application runs properly.
We have added some debug logs in the vi5 driver to trace the frame timestamp and sequence number.
We are observing that the first 2 frames are dropped (or corrupted), the 3rd frame has a 0 timestamp, and from the 4th frame onward we get proper frames with updated timestamps.
Please provide any input on resolving this issue.
Log : jetson-Pl.log (30.0 KB)
Do you have any inputs on this issue?
The first two frames are being skipped; we get valid data from the 3rd frame, but with a 0 timestamp. We are using the V4L2 framework to capture.
Is there any delay between the sensor streaming frames and the VI driver receiving them? We compared the v4l2 buffer flags against V4L2_BUF_FLAG_ERROR, and it seems the frames are getting corrupted. errorlogs.txt (48.3 KB)
There are Argus FIFOs.
On the Argus side, when the ISP is in use, our user-space driver internally initiates 2 extra captures for sensor exposure programming when Argus (and hence the underlying driver) receives the first capture request from the client.
These 2 internal captures are dropped at the driver level and are not sent to Argus or the client, so the client receives exactly the output captures it requested.
The sensor will have captured 3 frames, but the first 2 frames may have incorrect exposure settings.
Thanks for your response.
We are not using the nvargus path to capture frames; we are using the V4L2 framework.
V4L2 buffer index 0 always contains the third frame, and we are not sure where the first two frames are being dropped. By probing the CSI-2 data lanes we have confirmed that the Jetson Xavier NX receives all N frames, but in the application we receive only N-2 frames.
On the first boot the 3rd frame has a proper updated timestamp, but if we run our application again it has a 0 timestamp.
We are using a radar device that outputs CSI-2 data. Since it is radar data, the first 2 frames are also needed.
Can you point to the code where these N+2 captures are started and the 2 frames are skipped?
Please dig into these two kernel sources:
$public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/vi/channel.c and $public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/vi/vi5_fops.c.
The Xavier series uses VI-5; there are two kernel threads involved in capturing a frame buffer.
I am going through these driver files.
The kthread-based capture enqueue and dequeue appear to run normally. All allocated buffers are placed in the queue via the enqueue path, and as soon as a buffer is filled with data it is dequeued.
Enqueue and dequeue run in parallel; buffers are enqueued, then dequeued and released if they contain no valid data.
Once a buffer receives valid data (from the 3rd frame onward) it is dequeued and a proper timestamp is updated (in both the kernel and the application).
But we are still not able to figure out where the 2 frames are being dropped. Can you please point us to where this happens?
I noticed this in your logs: tag:CSIMUX_STREAM channel:0x00 frame:0 vi_tstamp:11862577318 data:0x00000100
This means SPURIOUS_DATA_STREAM_2: the VI sees some other packets before the FS (frame start) packet, while the VI always expects the first packet to be a frame start.
Based on this, what format are you using? Are you defining an embedded metadata height different from zero in your dtb? Did you try boosting your pix_clk_hz?
I am using the “bayer_bggr16” format and defining embedded_metadata_height = “0”;
I have not tried boosting pix_clk_hz; I am using pix_clk_hz = “155000000”;
Attached my device tree, tegra194-radar-my.dtsi (5.9 KB)
Maybe you should change pixel_t, because it is deprecated. You can use these properties instead: mode_type, csi_pixel_bit_depth, and pixel_phase.
Also, regarding your pixel_t, I’m not sure bayer_bggr16 is supported. You can check that in the extract_pixel_format function, located at $SOURCES_DIRECTORY/kernel/nvidia/drivers/media/platform/tegra/camera/sensor_common.c.
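As a sketch, the deprecated pixel_t entry could be replaced in the sensor mode node with something like the fragment below. The property values shown are illustrative for a 16-bit BGGR stream; whether 16-bit Bayer is accepted must be verified against the strings handled in extract_pixel_format():

```dts
mode0 {
    /* Replaces the deprecated pixel_t property; values are
     * illustrative -- confirm accepted strings in
     * sensor_common.c:extract_pixel_format(). */
    mode_type = "bayer";
    pixel_phase = "bggr";
    csi_pixel_bit_depth = "16";
};
```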
We are using an FPGA for LVDS-to-CSI-2 conversion, and pix_clk_hz is properly configured.
The data rate is 1240 Mbps × 2 (lanes) / 16 bits per pixel = 155 MHz.
The FPGA does not use an mclk. Is it still okay to specify mclk?
I gave it a try with the settings below, but nothing changed; same observations.
mclk_khz = “24000”;
mclk_multiplier = “9.33”;
pix_clk_hz = “182400000”;
(taking IMX219 sensor as ref)
Does this statement mean that the first 2 frames will be dropped? If they have incorrect exposure settings, will these frames be marked with the V4L2 error flag?
And we are not using libargus; we use V4L2 to capture.