I ran an experiment: I read the SOF timestamps from the trace log while the cameras captured a stopwatch, using:
$ cat /sys/kernel/debug/tracing/trace | grep 'tegra_channel_capture_frame: sof'
The SOF timestamps of the three synchronized cameras in the resulting log all deviate from each other on the order of 10 ms.
However, the actual time deviation of the camera acquisition is < 1 ms.
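For reference, here is a rough sketch of how that spread could be computed directly from a saved trace dump instead of by hand. It assumes the ns timestamp is the last whitespace-separated field of each matching line, and that the first three matching lines are one SOF per channel; the exact trace line format may differ between L4T releases, so adjust the fields accordingly:

```shell
# Take the first three SOF lines (assumed: one per channel) and print the
# spread between the earliest and latest timestamp in milliseconds.
# Assumption: the ns timestamp is the last whitespace-separated field,
# and trace.txt is a saved copy of /sys/kernel/debug/tracing/trace.
grep 'tegra_channel_capture_frame: sof' trace.txt | head -3 |
awk '{ts = $NF; if (min == "" || ts < min) min = ts; if (ts > max) max = ts}
     END {printf "spread: %.3f ms\n", (max - min) / 1e6}'
```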
Because I use the GMSL YUYV format (ISP processing is done on the camera module), I cannot use Argus to capture images, and therefore cannot use the getSyncSensorTimestampTsc() API.
How can we get a more accurate timestamp?
We use L4T 35.1.
In a real application, we can't always point a camera at a stopwatch; if one camera goes out of sync at runtime, we have no way to monitor or detect it.
Actually, those SOF timestamps are in units of ns. Do you see any frame deviation errors in real practice?
In reality, the frame rate is stable at 30 fps. I am aware that the unit of SOF is ns, and when converted, the deviation between cameras is still on the order of 10 ms.
If such a deviation really occurred, it would be discernible in the images.
Is this deviation caused by thread context switching introducing timing discrepancies into the timestamps recorded during VI capture?
Is there any method to achieve a deviation of less than 1ms?
May I know what your sample pipeline is for launching these three cameras?
Assuming you're running three instances, the deviation may be caused by the system configuration.
To narrow down the issue, please adjust the system configuration to ensure every process can obtain CPU resources.
You can use renice to modify the process priority:
# To modify the priority;
# note that it ranges from -20 (highest priority) to 19 (lowest priority),
# and the default is 0.
$ sudo renice -20 -p <pid>
You can also use taskset to pin each application to specific CPUs:
# Here is a sample that restricts the <pid> application to CPU-2 and CPU-3.
$ taskset -c -p 2,3 <pid>
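As a sketch, both settings can also be applied at launch time rather than afterwards by PID (./camera_app is a placeholder name for your capture program):

```shell
# Start the capture app pinned to CPU-2/CPU-3 at the highest priority.
# nice -n -20 requires root privileges; taskset -c sets the CPU affinity list.
sudo taskset -c 2,3 nice -n -20 ./camera_app &
pid=$!

# Verify that the affinity and nice value took effect.
taskset -c -p "$pid"
ps -o pid,ni,comm -p "$pid"
```

Doing it at launch avoids a window where the process runs with default priority and affinity before you renice/taskset it.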