As mentioned in a previous question (Image sequence numbers incrementing even if no image captured) we successfully use a camera connected to the Jetson Nano using a minimal device tree without a camera mode section. We use an external pixel clock and control the sensor directly using a user-space library.
As also noted, we have seen several new problems with JetPack 4.6.1 (and 4.6.2) that were not present in JetPack 4.6. One of these is that the timestamps of captured images vary significantly in JP 4.6.1 but were fairly stable in JP 4.6. One reason for this appears to be the removal of the Frame Start (SOF) syncpt within the function tegra_channel_capture_frame_multi_thread (in file vi2_fops.c).
> This means checking SOF in the capture thread doesn't help avoid a failure
> in the release thread. Hence we can simplify the capture thread to program only
> the capture related settings, skip the checking of SOF and leave the checking
> of vi/csi status and the recovery process to be done in the release thread.
With my limited knowledge of the hardware involved this appears to be a good explanation of the change. Thank you for that.
My second question still remains: is there any way to improve the regularity of the timestamps even with the syncpt removed? Perhaps I should also ask whether the refactoring affected the timestamps of other cameras too, or whether we are the only ones observing this.
As mentioned in Image sequence numbers incrementing even if no image captured we are using a Sony IMX567. We set up a V4L subdevice that gives us access to the sensor registers via custom V4L ioctls. The complete register initialisation is done in user space so that we can set up different resolutions and frame rates. We then use the standard video device e.g. /dev/video0 to access the image data, just like a normal V4L application.
Unfortunately I am unable to reproduce the timestamp problem with a Raspberry Pi camera and v4l2-ctl at the moment, but, in theory at least, in JetPack 4.6.1 the buffer timestamp no longer strictly conforms to the V4L2 definition (3.6. Buffers — The Linux Kernel documentation), which states:
> For capture streams this is time when the first data byte was captured, as returned by the clock_gettime() function for the relevant clock id;
This is because, since the refactoring of the low-latency mode, there is no longer a SOF syncpt, so the driver cannot know when the first data byte was captured. Instead, the timestamp is now taken just before the frame is enqueued in the hardware. The actual frame always starts some time later, but it is no longer possible to know how much later.
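To quantify the irregularity we are describing, one can look at the deltas between consecutive buffer timestamps: with a SOF-based timestamp the deltas stay close to the frame period, whereas a timestamp taken at enqueue time also absorbs scheduling latency. A small helper (a sketch; the array would be filled with nanosecond values taken from each v4l2_buffer timestamp) to compute the spread of those deltas:

```c
#include <stddef.h>

/* Given 'n' monotonically increasing frame timestamps in nanoseconds,
 * return the difference between the largest and smallest inter-frame
 * delta (0 if there are fewer than two deltas). A perfectly regular
 * stream at a fixed frame rate gives a spread close to 0. */
static long long timestamp_jitter_ns(const long long *ts, size_t n)
{
    if (n < 3)
        return 0;
    long long min_d = ts[1] - ts[0];
    long long max_d = min_d;
    for (size_t i = 2; i < n; i++) {
        long long d = ts[i] - ts[i - 1];
        if (d < min_d)
            min_d = d;
        if (d > max_d)
            max_d = d;
    }
    return max_d - min_d;
}
```

On JP 4.6 this spread stayed small for us; under JP 4.6.1 it is noticeably larger, which matches the variation described above.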
I will keep trying to find a way of reproducing the problem in a more standard way.