Currently, we are developing a time-synchronized sensing system. The main components of the system are:
ROSCube (based on Jetson AGX Xavier and L4T 32.6.1) from ADLINK Technology, Inc.
SONY IMX490-based trigger-sync camera, running in external pulse sync mode (i.e., the camera starts read-out according to the pulse signal emitted from the ROSCube)
To check timing consistency and accuracy, we took kernel traces using the monotonic raw clock. The following is an excerpt from the trace:
According to 3.6. Buffers — The Linux Kernel documentation, the flag TIMESTAMP_MONOTONIC means that the v4l2 timestamp (timestamp = 5443810588000 at line #4) is expressed on a monotonic clock scale. However, there is a huge difference (approx. 29 seconds) between the v4l2 timestamp and the trace timestamp (5414.687813).
We found a related topic (Argus Timestamp Domain) in this forum, which describes the formula clock_gettime(MONOTONIC_RAW) = cycle_ns(TSC) - offset_ns. Because the content of /sys/devices/system/clocksource/clocksource0/offset_ns was 29152770368 when we took the above trace, the v4l2 timestamp comes to a similar scale if we subtract offset_ns from it (i.e., 5443810588000 - 29152770368 = 5414657817632 [ns] = 5414.657817632 [s]). However, the result is earlier than the trigger time (5414.660886 at line #1), which is strange because the camera should output images only after being triggered.
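For reference, the conversion we are attempting can be written as the following minimal sketch (this reflects only our current understanding of the formula above; the numeric constant is simply the v4l2 timestamp observed in our trace):

// Sketch: convert a V4L2 buffer timestamp (assumed to be TSC nanoseconds)
// to the MONOTONIC_RAW scale via clock_gettime(MONOTONIC_RAW) = cycle_ns(TSC) - offset_ns.
#include <cstdint>
#include <fstream>
#include <iostream>

int main() {
    // offset_ns as exported by the kernel (29152770368 when our trace was taken)
    std::ifstream f("/sys/devices/system/clocksource/clocksource0/offset_ns");
    std::uint64_t offset_ns = 0;
    f >> offset_ns;

    const std::uint64_t v4l2_ts_ns = 5443810588000ULL;   // v4l2 timestamp from line #4 of the trace
    const std::uint64_t raw_ns = v4l2_ts_ns - offset_ns; // expected MONOTONIC_RAW nanoseconds

    std::cout << raw_ns << " ns on the MONOTONIC_RAW scale (to compare with the trace timestamps)\n";
    return 0;
}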
So our questions are:
It looks like the V4L2 timestamp is generated based on the arrival of the FS packet (CHANSEL_PXL_SOF). If we compute vi_tstamp (170119080898) * 32, we get 5443810588736, which matches the V4L2 timestamp (see the small check below these questions). Is our understanding correct?
What should we do to convert the v4l2 timestamp into a system clock value, such as the monotonic clock?
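To make question 1 concrete, this is the arithmetic we are checking (treating vi_tstamp as a count of 32 ns ticks is our assumption; the constants are taken from the trace above):

// Hypothesis behind question 1: vi_tstamp counts ticks of 32 ns each,
// so vi_tstamp * 32 should land in the same nanosecond domain as the V4L2 timestamp.
#include <cstdint>
#include <iostream>

int main() {
    const std::uint64_t vi_tstamp  = 170119080898ULL;   // from the CHANSEL_PXL_SOF trace event
    const std::uint64_t v4l2_ts_ns = 5443810588000ULL;  // v4l2 timestamp of the same frame

    const std::uint64_t scaled_ns = vi_tstamp * 32;      // = 5443810588736
    std::cout << "vi_tstamp * 32        = " << scaled_ns << " ns\n";
    std::cout << "difference to v4l2 ts = " << (scaled_ns - v4l2_ts_ns) << " ns\n";  // 736 ns
    return 0;
}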
FYI: We set the following to get the kernel trace:
Thank you very much for your answer. However, we still couldn’t solve our issue…
Let me ask a few additional questions:
We understood that clock_gettime(MONOTONIC_RAW) = cycle_ns(TSC) - offset_ns, but how much accuracy can we expect? Is it on the order of microseconds or nanoseconds?
It looks like we can get cycle_ns(TSC) by calculating vi_timestamp * 32, but what is the meaning of the factor 32?
We can change the CPU frequency by setting the power mode. Does this have any influence on the timestamp calculation? Is there anything else we should consider?
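Regarding the power-mode question, one simple check we could run ourselves is to log offset_ns together with CLOCK_MONOTONIC_RAW while switching power modes, along the lines of this sketch (the 1-second interval is arbitrary):

// Sketch: periodically log offset_ns together with CLOCK_MONOTONIC_RAW so that we can
// see whether offset_ns changes after switching the power mode (e.g. via nvpmodel).
#include <chrono>
#include <fstream>
#include <iostream>
#include <thread>
#include <time.h>

int main() {
    for (;;) {
        std::ifstream f("/sys/devices/system/clocksource/clocksource0/offset_ns");
        long long offset_ns = 0;
        f >> offset_ns;

        struct timespec raw{};
        clock_gettime(CLOCK_MONOTONIC_RAW, &raw);

        std::cout << (raw.tv_sec * 1000000000LL + raw.tv_nsec)
                  << "  offset_ns=" << offset_ns << "\n";
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    return 0;
}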
Thank you for the suggestion, and sorry for our late reply. I am a colleague of Yuichi's, and we are tackling this problem together.
Following your suggestion, we checked the values of clock_gettime(MONOTONIC_RAW) and compared them with the timestamps in the kernel trace. The pseudo-code of the sensor trigger is as follows:
struct timespec sample;
clock_gettime(CLOCK_MONOTONIC_RAW, &sample);  // sample the raw monotonic clock
set_gpio(HIGH);                               // trigger
// print after triggering so that I/O latency is not included in the kernel trace
std::cerr << sample.tv_sec << "." << sample.tv_nsec << std::endl;
The attachment is the comparison result: it plots the difference between the printed value of clock_gettime(MONOTONIC_RAW) and the timestamp in the kernel trace for the corresponding trigger. Because the result shows the difference is only about 0.02 ms, we believe the timestamps in the kernel trace indeed represent values on the MONOTONIC_RAW clock.
Is there a possibility that offset_ns changes (or that other factors need to be considered) depending on the power mode of the Jetson being used? We ask because we imagine offset_ns is determined during OS boot, while the frequencies of some components, including the CPUs, are set after booting.
If you can think of other causes of the timestamp mismatch, we would really appreciate it if you could share them with us.
Hi @ShaneCCC
This is just a friendly reminder that we are still looking forward to hearing from you.
Because timestamping is essential for time-synchronized sensing systems, we would really appreciate it if you could share any updates you have.
The comparison result shows that the sof_timestamp in vi5_fops.c and the timestamp in the trace log contain the same values.
Are there any other items we could check regarding this issue?
@ShaneCCC Thank you for the comment.
I believe we already investigated the GPIO trigger time in this previous post. To me, the trigger time in the kernel trace certainly appears to be expressed in MONOTONIC_RAW time.
If I have misunderstood your suggestion, I'd appreciate it if you could let me know.
Thanks.
@ShaneCCC
Thanks for the additional information. I conducted an experiment as follows:
1. Add clock_gettime(MONOTONIC_RAW) and print the value in our triggering module, in the same way as in this previous post.
2. At the same time, print the SoF timestamp values from vi5_fops.c to the system log (dmesg).
3. Compare the value from 1. (hereinafter “trigger_time”) and the value from 2. (hereinafter “SoF”) using /sys/devices/system/clocksource/clocksource0/offset_ns (“offset_ns”): diff = (SoF - offset_ns) - trigger_time (see the sketch below).
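For clarity, the per-frame diff in step 3 can be computed along the lines of the following sketch (the struct fields are illustrative, and the sample values are the ones from the trace in the first post; offset_ns may of course differ per boot):

// Sketch of the step-3 comparison: diff = (SoF - offset_ns) - trigger_time.
// A negative diff means the converted SoF precedes the trigger time.
#include <cstdint>
#include <iostream>
#include <vector>

struct Frame {
    std::uint64_t trigger_time_ns;  // clock_gettime(CLOCK_MONOTONIC_RAW) at trigger (step 1)
    std::uint64_t sof_ns;           // SoF timestamp printed from vi5_fops.c (step 2)
};

int main() {
    const std::int64_t offset_ns = 29152770368LL;  // from clocksource0/offset_ns (example value)
    const std::vector<Frame> frames = {
        {5414660886000ULL, 5443810588000ULL},      // example pair from the original trace
    };
    for (const Frame& f : frames) {
        const std::int64_t diff_ns = (static_cast<std::int64_t>(f.sof_ns) - offset_ns)
                                     - static_cast<std::int64_t>(f.trigger_time_ns);
        std::cout << "diff = " << diff_ns << " ns\n";  // about -3.07e6 ns for the example pair
    }
    return 0;
}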
We calculated the above diff over a 1-minute measurement (59 frames in total), and the attachment is the resulting plot. As the result shows, all diffs are negative, which means our original question still stands…
@ShaneCCC
Thank you for pointing that out. You are completely right. Although the absolute value of the observed delay looks larger than expected, I think that is a separate issue, so I'd like to close this thread. I really appreciate your help!