Clocks: Linux System Time vs TSC vs RTC

We have several questions regarding the relationship between Linux system time, the TSC, and the RTC.
Our observations based on the attached script are as follows:

  • It seems that Linux system time is completely independent of
    TSC time. Are TSC clock values used to calculate the system time?

  • The RTC looks like it is almost exactly 500 ms off even after a sync. Is that intentional, or is there an error somewhere?

  • What clock is used to calculate Linux system time? If it uses the TSC, is there a formula, and what are its parameters?

  • The manual states that the TSC should run at 31.25 MHz. However, in our experiments the least drift between Linux system time and the TSC occurred at 31.249 MHz. Is that the expected accuracy of the clock, or is something else wrong?

You should be able to replicate the experiments on your Xavier with the attached script.

Run as follows:

sudo watch -n .1 ./test.sh

Script test.sh

#!/bin/bash
#get all the times
#read RTC via registers (not via date):
#first read millisecs, which latches the shadow seconds register,
#then read the shadow seconds
rtcdmsec1=$(printf "%d\n" $(busybox devmem 0x0c2a0010 W))
rtcdsec1=$(printf "%d\n" $(busybox devmem 0x0c2a000c W))
#Read higher bits
list2=$(busybox devmem 0x03010004 W)
#read lower bits, bracketing the read with system-time samples
nsecsrtc1=$(date +%s%N)
list=$(busybox devmem 0x03010000 W)
nsecsrtc2=$(date +%s%N)
rtcdmsec2=$(printf "%d\n" $(busybox devmem 0x0c2a0010 W))
rtcdsec2=$(printf "%d\n" $(busybox devmem 0x0c2a000c W))
#Convert time to RTC
rtctime1=$(echo "($rtcdsec1 * 1000 + $rtcdmsec1)" | bc)
rtctime2=$(echo "($rtcdsec2 * 1000 + $rtcdmsec2)" | bc)
rtctime=$(echo "($rtctime1 + $rtctime2)/2" |bc)
rtctimesec=$(echo "$rtctime/1000" | bc)
#average to account for scheduling and other delays
nsecsrtc=$(echo "($nsecsrtc1+$nsecsrtc2)/2" | bc)

rtcdiff=$(echo "$rtctime - $nsecsrtc/1000000" | bc)

convs=$(printf "%d\n" $list)
convs2=$(printf "%d\n" $list2)
#shift the upper 32 bits into place
higherbits=$((convs2 << 32))
decn=$(echo "($higherbits + $convs)/31.249" |bc)
secs=$(echo "$decn/1000000.0" | bc)
secsrtc=$(echo "$nsecsrtc/1000000000" |bc)
delta=$(echo "(-($decn-$nsecsrtc/1000))"|bc)
deltams=$(echo "($delta/1000)"|bc)
systimems=$(echo "$nsecsrtc / 1000000" | bc)
echo "Difference between TKE_AON_SHARED_TKETSC0_0 /31.249 (is supposed to be 31.25) and linux clock."
printf 'RTC Direct time (millisec): %s\t (secs) %s\n' "$rtctime" "$rtctimesec"
printf 'System time     (millisec): %s\t (secs) %s\n' "$systimems" "$secsrtc"
echo "Theoretically RTC and system time could line up but typically they can be quite different"
printf 'System time vs. RTC diff (millisec): %s\n' "$rtcdiff"
printf 'TSC:Microsecs:  %s\t Secs: %s \n\n' "$decn" "$secs"
printf 'Difference between the system time and TSC:\nDelta (Microsec): %s\tMillisecs %s\n' "$delta" "$deltams"

Expected output:

Every 0.1s: ./test.sh                           xavier: Tue Sep  3 17:49:11 2019

Difference between TKE_AON_SHARED_TKETSC0_0 /31.249 (is supposed to be 31.25) an
d linux clock.
RTC Direct time (millisec): 1567547351668        (secs) 1567547351
System time     (millisec): 1567547351186        (secs) 1567547351
Theoretically RTC and system time could line up but typically they can be quite
different
System time vs. RTC diff (millisec): 482
TSC:Microsecs:  239245444392     Secs: 239245

Difference between the system time and TSC:
Delta (Microsec): 1567308105742004      Millisecs 1567308105742

After running for a couple of days, here is a more precise factor:
31.24851168 MHz
Is this expected?

System time (the date command) is RTC time.
However, the date application may not go to hardware to get the time from the RTC. It has its own userspace way of maintaining the time.

The TSC is a free-running timer that starts during boot, about 7 seconds before the kernel boots; that offset will always remain. To use the TSC you should use the clock_gettime(CLOCK_MONOTONIC) API.
The way Linux maintains time counters such as CLOCK_REALTIME or CLOCK_MONOTONIC does depend on the TSC rate, but the origin is arbitrary, so there is always at least a relatively large offset between the two.

As for drift, this depends on whether you use NTP (or, more generally, any method that locks the Linux clock to an external time reference). You basically have two cases:

  • No NTP: the TSC runs at exactly 31.25 MHz as measured by the Linux system time, and Linux time and the TSC do not drift against each other, because there is no other reference. The Linux realtime clock itself will, however, drift against an external reference.
  • NTP active: the TSC will appear to drift against Linux time by an amount dependent on the instantaneous accuracy of the crystal, unless you use the fine-tuning capabilities built into the TSC. Basically, you could implement a software-level lock against an external reference to make the TSC appear to run at exactly 31.25 MHz, implicitly locked to Linux time in that case. Observing 31.249 MHz corresponds to a frequency error of 32 ppm, which is not an impossible value; you could expect to observe anything within 31.25 MHz +/- 50 ppm (the exact range depends on the quality of the crystal, combined with aging and temperature effects).
    o The situation gets more complicated if you use SC7 and/or lock the TSC to the 32,768 Hz clock

The RTC is normally synchronous with the 32,768 Hz clock and so will drift against both the TSC and Linux time (the 32,768 Hz reference is independent of the crystal reference). It is started automatically at cold-boot reset, so the TSC and RTC have both a static offset and a drift against each other.

We will check further on the date application and report back.
Meanwhile, please use clock_gettime() to measure the time.